path (string, 7–265 chars) | concatenated_notebook (string, 46–17M chars)
---|---
Week06.ipynb | ###Markdown
###Code
###Output
_____no_output_____
###Markdown
[Bag of Words Meets Bags of Popcorn](https://www.kaggle.com/c/word2vec-nlp-tutorial/data)
======

Data Set

The labeled data set consists of 50,000 IMDB movie reviews, specially selected for sentiment analysis. The sentiment of the reviews is binary: reviews with an IMDB rating < 5 have a sentiment score of 0, and reviews with a rating >= 7 have a sentiment score of 1. No individual movie has more than 30 reviews. The 25,000-review labeled training set does not include any of the same movies as the 25,000-review test set. In addition, there are another 50,000 IMDB reviews provided without any rating labels.

File descriptions

labeledTrainData - The labeled training set. The file is tab-delimited and has a header row followed by 25,000 rows containing an id, sentiment, and text for each review.

Data fields

* id - Unique ID of each review
* sentiment - Sentiment of the review; 1 for positive reviews and 0 for negative reviews
* review - Text of the review

Objective

The objective with this dataset is to predict **sentiment** (positive or negative) based on **review**, so X is the **review** column and y is the **sentiment** column.

1. Load Dataset

We only focus on the "labeledTrainData.tsv" file.
Let's first of all have a look at the data.
[Click here to download dataset](https://s3-ap-southeast-1.amazonaws.com/ml101-khanhnguyen/week3/assignment/labeledTrainData.tsv)
###Code
# Import pandas, numpy
import pandas as pd
import numpy as np
# Read dataset with extra params sep='\t', encoding="latin-1"
df = pd.read_csv('labeledTrainData.tsv',sep='\t',encoding="latin-1")
df.head()
###Output
_____no_output_____
###Markdown
2. Preprocessing
###Code
import nltk
# Download the corpora used below; nltk.download() with no arguments opens an
# interactive downloader, so we request the needed resources directly.
nltk.download('brown')
nltk.download('stopwords')
from nltk.corpus import brown
brown.words()
from nltk.corpus import stopwords
stop = stopwords.words('english')
stop
import re
def preprocessor(text):
    # Strip HTML tags
    text = re.sub(r'<[^>]*>', '', text)
    # Save emoticons like :), :-(, =D before punctuation is removed
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    # Lowercase, replace non-word characters with spaces, re-append emoticons
    text = (re.sub(r'[\W]+', ' ', text.lower()) + ' ' + ' '.join(emoticons).replace('-', ''))
    return text
#test the function preprocessor()
print(preprocessor('With all this stuff going down at the moment #$::? )'))
from nltk.stem import PorterStemmer
porter = PorterStemmer()  # stemmer instance used by tokenizer_porter below

# Split a text into a list of words
def tokenizer(text):
    return text.split()

# Split a text into a list of stemmed words
def tokenizer_porter(text):
    return [porter.stem(word) for word in text.split()]
# Split the dataset into train and test sets
from sklearn.model_selection import train_test_split
X = df['review']
y = df['sentiment']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
###Output
_____no_output_____
###Markdown
3. Create Model and Train

Use a **Pipeline** to chain the **tfidf** step and the **LogisticRegression** step.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words=stop,
tokenizer=tokenizer_porter,
preprocessor=preprocessor)
# Import Pipeline, LogisticRegression, TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
clf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
clf.fit(X_train, y_train)
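# Added check (not in the original notebook): mean accuracy of the fitted
# pipeline on the held-out 30% split.
clf.score(X_test, y_test)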
###Output
_____no_output_____ |
theory_learning/theory_unification.ipynb | ###Markdown
Settings for which unification methods to turn on:
###Code
is_symbolic_unification = True
is_network_unification = True
###Output
_____no_output_____
###Markdown
Load theory hub:
###Code
"""Load the theory hub. Should point to the directory that the theory_hub lives.
An example theory hub can be downloaded at https://drive.google.com/file/d/1I0P0MUjP9b_HGMacNZ0avO_0CtTtG_y0/view?usp=sharing,
after downloading, put it under the directory dirname.
"""
exp_id = "test"
date_time = "10-15"
dirname = theory_PATH + "/{0}_{1}/".format(exp_id, date_time)
filename_hub = dirname + "file_continuous_num_4_pred_8_linear_dom_8_onehot_mse_1e-07_sim_DLs-0-3-relative_optim_adam-0.005_adam-0.001_reg_1e-08_1e-05_batch_2000_core_DLs_order_-1_lossd_None_False_infl_False_#_20000_mul_5000_MDL_both_2D_3L_id_210_7_hub.p"
theory_hub = load_model_dict_at_theory_hub(pickle.load(open(filename_hub, "rb")), is_cuda = is_cuda)
###Output
_____no_output_____
###Markdown
Symbolic unification:
###Code
"""Implementing the symbolic unification algorithm (Alg. 4) in (Wu and Tegmark, 2019):"""
if is_symbolic_unification:
    num_clusters = 3  # number of clusters to group the theories into
    df, exprs_unified_list = unification_symbolic(theory_hub.theory, num_clusters=num_clusters, verbose=True)
###Output
_____no_output_____
###Markdown
Learning network unification:
###Code
"""Directly learn multiple master theories that unifies subsets of theories in the theory hub (new feature).
Here the unification is using generalized-mean loss as introduced in (Wu and Tegmark, 2019). Instead of operating
on data points, this generalized-mean loss is operating on theories, s.t. for each master theories, it only
specilizes to subset of theories.
"""
if is_network_unification:
    # Settings on theory:
    array_id = 0
    input_size = 4
    output_size = 2
    num_master_theories = 5
    master_model_type = "regulated-Net"
    statistics_output_neurons = 6
    master_loss_combine_mode = "on-loss"
    master_loss_core = "mse"
    master_loss_mode = "harmonic"
    master_model_num_neurons = 8
    master_optim_type = ("adam", 1e-3)
    master_optim_type_classifier = ("adam", 5e-5)
    master_reg_amp = 1e-3
    master_reg_amp_classifier = 1e-3
    theory_remove_threshold = 5e-3
    target_array_id = "7"
    filter_words = []

    # Allow command-line overrides of the settings above:
    exp_id = get_args(exp_id, 1)
    num_master_theories = get_args(num_master_theories, 2, "int")
    master_model_type = get_args(master_model_type, 3)
    statistics_output_neurons = get_args(statistics_output_neurons, 4, "int")
    master_loss_combine_mode = get_args(master_loss_combine_mode, 5)
    master_loss_core = get_args(master_loss_core, 6)
    master_loss_mode = get_args(master_loss_mode, 7)
    master_model_num_neurons = get_args(master_model_num_neurons, 8, "int")
    master_optim_type = get_args(master_optim_type, 9, "tuple")
    master_optim_type_classifier = get_args(master_optim_type_classifier, 10, "tuple")
    master_reg_amp = get_args(master_reg_amp, 11, "float")
    master_reg_amp_classifier = get_args(master_reg_amp_classifier, 12, "float")
    theory_remove_threshold = get_args(theory_remove_threshold, 13, "float")
    target_array_id = get_args(target_array_id, 14, "int")
    filter_words = get_args(filter_words, 15, "tuple")
    date_time = get_args(date_time, 16)
    array_id = get_args(array_id, 17, "int")

    # Setting up dirname and filename:
    load_previous = True
    filename_hub_cand = filter_filename(dirname, include = [target_array_id, "hub.p", *filter_words])
    filename_hub = dirname + filename_hub_cand[0]
    print("filename_hub: {0}".format(filename_hub))
    filename = filename_hub[:-6] + ".p"
    make_dir(filename)
    print("filename: {0}\n".format(filename))
    filename_unification = filename[:-2] + "/unification/num_{0}_type_{1}_statistics_{2}_{3}_core_{4}_mode_{5}_neurons_{6}_optim_{7}_{8}_reg_{9}_{10}_thresh_{11}_id_{12}".format(
        num_master_theories, master_model_type, statistics_output_neurons, master_loss_combine_mode, master_loss_core, master_loss_mode, master_model_num_neurons,
        to_string(master_optim_type), to_string(master_optim_type_classifier), master_reg_amp, master_reg_amp_classifier, theory_remove_threshold, array_id,
    )
    make_dir(filename_unification)
    print("filename_unification: {0}\n".format(filename_unification))

    # Initialize certain parameters:
    master_reg_dict = {"master_model": {"weight": master_reg_amp, "bias": master_reg_amp},
                       "statistics_Net": {"weight": master_reg_amp, "bias": master_reg_amp},
                       }
    master_reg_dict_classifier = {"classifier": {"weight": master_reg_amp_classifier, "bias": master_reg_amp_classifier}}
    struct_param_regulated_Net = [
        [master_model_num_neurons, "Simple_Layer", {}],
        [master_model_num_neurons, "Simple_Layer", {}],
        [output_size, "Simple_Layer", {"activation": "linear"}],
    ]

    # Load theory_hub, waiting (with exponential backoff) until it contains theories:
    wait_time = 1
    wait_time_exponent = 1.2
    while True:
        theory_hub = load_model_dict_at_theory_hub(pickle.load(open(filename_hub, "rb")), is_cuda = is_cuda)
        if len(theory_hub.theory) == 0:
            wait_time *= wait_time_exponent
            print("No theory exists in the theory_hub. Wait for {0:.1f} seconds...".format(wait_time))
            time.sleep(wait_time)
        else:
            print("Successfully loaded theory_hub {0} with non-empty theory_collections!".format(filename_hub))
            break

    info_dict = {}
    # Propose master_theories:
    theory_dict = theory_hub.get_theory_tuples(input_size = input_size)
    master_theory_dict = theory_hub.propose_master_theories(num_master_theories = num_master_theories,
                                                            input_size = input_size,
                                                            output_size = output_size,
                                                            statistics_output_neurons = statistics_output_neurons,
                                                            master_model_type = master_model_type,
                                                            struct_param_regulated_Net = struct_param_regulated_Net,
                                                            )

    # Fit the master_theories to all the theories:
    data_record = theory_hub.fit_master_theory(
        master_theory_dict = master_theory_dict,
        theory_dict = theory_dict,
        optim_type = master_optim_type,
        reg_dict = master_reg_dict,
        loss_core = master_loss_core,
        loss_mode = master_loss_mode,
        loss_combine_mode = master_loss_combine_mode,
        num_iter = 1000,
        patience = 10,
        inspect_interval = 10,
        isplot = isplot,
        filename = filename_unification,
    )
    info_dict["data_record_whole"] = deepcopy(data_record)
    info_dict["master_theory_whole"] = deepcopy({name: master_theory.model_dict for name, master_theory in master_theory_dict.items()})
    pickle.dump(info_dict, open(filename_unification + ".p", "wb"))

    # Assign master theories to theories:
    group_list = theory_hub.assign_master_theories_to_theories(master_theory_dict, theory_dict)
    print("=" * 150 + "\nMaster_theory assignment:")
    for assigned_master_theory_dict, assigned_theory_dict in group_list:
        print("master_theory: {0}".format(list(assigned_master_theory_dict.keys())[0]))
        print("assigned_theories: {0}\n".format(list(assigned_theory_dict.keys())))

    # Train each master_theory individually:
    for i, (assigned_master_theory_dict, assigned_theory_dict) in enumerate(group_list):
        print("=" * 150)
        print("Fitting {0}th assigned group:".format(i))
        print("master_theory: {0}".format(list(assigned_master_theory_dict.keys())[0]))
        print("assigned_theories: {0}\n".format(list(assigned_theory_dict.keys())) + "=" * 150 + "\n")
        master_theory_name = list(assigned_master_theory_dict.keys())[0]
        master_theory = assigned_master_theory_dict[master_theory_name]

        # Train the master_model:
        data_record = theory_hub.fit_master_theory(
            master_theory_dict = assigned_master_theory_dict,
            theory_dict = assigned_theory_dict,
            optim_type = master_optim_type,
            reg_dict = master_reg_dict,
            loss_core = master_loss_core,
            loss_mode = master_loss_mode,
            loss_combine_mode = master_loss_combine_mode,
            num_iter = 10000,
            patience = 20,
            inspect_interval = 50,
            isplot = isplot,
            filename = filename_unification,
        )
        info_dict["data_record_{0}".format(i)] = deepcopy(data_record)

        # Train the master classifier:
        data_record_classifier = theory_hub.fit_master_classifier(
            master_theory = master_theory,
            theory_dict = assigned_theory_dict,
            optim_type_classifier = master_optim_type_classifier,
            reg_dict_classifier = master_reg_dict_classifier,
        )
        info_dict["data_record_classifier_{0}".format(i)] = deepcopy(data_record_classifier)

        # Add master_theory_tuple:
        theory_hub = load_model_dict_at_theory_hub(pickle.load(open(filename_hub, "rb")), is_cuda = is_cuda)
        theory_hub.add_master_theory(name = master_theory_name,
                                     master_theory = assigned_master_theory_dict[master_theory_name],
                                     theory_tuples = assigned_theory_dict,
                                     is_replace = True,
                                     )
        master_theory_tuple = Master_Theory_Tuple(master_theory = assigned_master_theory_dict[master_theory_name], theory_tuples = assigned_theory_dict)
        info_dict["master_theory_tuple_{0}".format(i)] = deepcopy(master_theory_tuple.model_dict)

        # Remove passed theories (whose loss with the master_theory is below theory_remove_threshold):
        master_loss_fun = Master_Loss_Fun(core = master_loss_core, cumu_mode = master_loss_mode)
        loss_matrix = master_loss_fun.get_loss_matrix(assigned_master_theory_dict, assigned_theory_dict, use_train = False)
        passed_theory = (loss_matrix < theory_remove_threshold).data.long()[0]
        passed_theory_names = []
        for j in range(len(passed_theory)):
            is_pass = passed_theory[j]
            if is_pass == 1:
                passed_theory_names.append(list(assigned_theory_dict.keys())[j])
        # if experiment_mode != "on-unification":
        #     popped_theories = theory_hub.remove_theories(passed_theory_names)
        #     pickle.dump(theory_hub.model_dict, open(filename_hub, "wb"))
        info_dict["popped_theories"] = passed_theory_names
        pickle.dump(info_dict, open(filename_unification + ".p", "wb"))
###Output
_____no_output_____ |
code/Analysis_code/Significance_Tests_Token_Type_TTR.ipynb | ###Markdown
Significance Tests: Token, Type, TTR, & K-Band

Note: this code uses the character token/type dataframe annotated w/ TTR and k-band created [here](https://github.com/Data-Science-for-Linguists-2019/Animated-Movie-Gendered-Dialogue/blob/master/code/Analysis_code/Tok_Type_TTR_Analysis.ipynb) and the line by line dataframe created [here](https://github.com/Data-Science-for-Linguists-2019/Animated-Movie-Gendered-Dialogue/blob/master/code/Analysis_code/POS_Tag_Adj_Analysis.ipynb)

After I found all those differences in Token Counts, Type Counts and TTR--do they mean anything for my data?

In the code below, I first compare Genders and Roles separately, asking...
* In general, is there a difference in male/female or pro/ant stats?
* Across Disney eras, is there a difference in male, female, pro, and ant stats?
* Within each Disney era, is there a difference in male/female or ant/pro stats?
* Across the two companies, Disney and Dreamworks, are there differences in male/female and pro/ant stats?
* Within each company, is there a difference in male/female and pro/ant stats?

Then, I combine role and gender, asking...
* Is there a difference between male pros and ants?
* Is there a difference between female pros and ants?
* Is there a difference between male and female pros?
* Is there a difference between male and female ants?

Specifically, I will investigate the following stats:
* Token Count per Line
* Type Count per Line
* Total Token Count
* Total Type Count
* TTR
* K-Band

Table of Contents
1. [Data Frames/Data](#data)
2. [Token/Type Count per Line](#toktypeline)
    1. [Gender](#gen1)
    2. [Role](#role1)
    3. [Gender and Role](#gr1)
3. [Total Token/Type Count per Character](#totaltoktype)
    1. [Gender](#gen2)
    2. [Role](#role2)
    3. [Gender and Role](#gr2)
4. [TTR](#ttr)
    1. [Gender](#gen3)
    2. [Role](#role3)
    3. [Gender and Role](#gr3)
5. [K-Band](#kband)
    1. [Gender](#gen4)
    2. [Role](#role4)
    3. [Gender and Role](#gr4)
6. [Quick Peek at TTR by Line](#qp)

DataFrames/Data
###Code
import pandas as pd
from scipy import stats
movie_df = pd.read_pickle(r'C:/Users/cassi/Desktop/Data_Science/Animated-Movie-Gendered-Dialogue/private/all_tagged_dialogue.pkl')
movie_df.info()
char_df = pd.read_pickle(r'C:/Users/cassi/Desktop/Data_Science/Animated-Movie-Gendered-Dialogue/private/char_tok_type_TTR.pkl')
char_df.info()
f_movie_df = movie_df[movie_df.Gender == 'f']
m_movie_df = movie_df[movie_df.Gender == 'm']
pro_movie_df = movie_df[movie_df.Role == 'PRO']
ant_movie_df = movie_df[movie_df.Role == 'ANT']
help_movie_df = movie_df[movie_df.Role == 'HELPER']
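# Added note: stats.ttest_ind(..., equal_var=False) runs Welch's t-test and
# returns (statistic, pvalue); throughout this notebook a pvalue below 0.05
# is read as a significant difference. A toy illustration on made-up numbers:
stats.ttest_ind([10, 12, 11, 13], [20, 22, 21, 23], equal_var=False)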
###Output
_____no_output_____
###Markdown
Token and Type Stats

Average Token and Type Count per Line

Gender

Overall
###Code
#Token count per Line M v F
stats.ttest_ind(m_movie_df.Token_Count, f_movie_df.Token_Count, equal_var = False)
#Type Count per Line M v F
stats.ttest_ind(m_movie_df.Type_Count, f_movie_df.Type_Count, equal_var = False)
###Output
_____no_output_____
###Markdown
Overall, Token Count per line and Type Count per line between genders are significantly different.

Gender Over Time
###Code
f_movies_early = f_movie_df[f_movie_df.Disney_Period == 'EARLY']
m_movies_early = m_movie_df[m_movie_df.Disney_Period == 'EARLY']
f_movies_mid = f_movie_df[f_movie_df.Disney_Period == 'MID']
m_movies_mid = m_movie_df[m_movie_df.Disney_Period == 'MID']
f_movies_late = f_movie_df[f_movie_df.Disney_Period == 'LATE']
m_movies_late = m_movie_df[m_movie_df.Disney_Period == 'LATE']
# Comparing female Token Count over time
stats.f_oneway(f_movies_early.Token_Count, f_movies_mid.Token_Count, f_movies_late.Token_Count)
stats.ttest_ind(f_movies_early.Token_Count, f_movies_mid.Token_Count, equal_var=False)
stats.ttest_ind(f_movies_early.Token_Count, f_movies_late.Token_Count, equal_var=False)
stats.ttest_ind(f_movies_mid.Token_Count, f_movies_late.Token_Count, equal_var=False)
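# Added caveat: the pairwise t-tests after each ANOVA in this notebook are
# uncorrected; with three comparisons per family, a correction such as
# Bonferroni (requiring roughly p < 0.05/3) would be a more conservative
# way to read the p-values reported below.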
###Output
_____no_output_____
###Markdown
There are significant differences between female token count per line over time. However, to my surprise, female token counts were actually much higher in the early period than in the middle period. Female token counts then went up again in the late period, to a count that isn't very different from the counts in the early period.
###Code
# Comparing Male Token Count per Line Over Time
stats.f_oneway(m_movies_early.Token_Count, m_movies_mid.Token_Count, m_movies_late.Token_Count)
stats.ttest_ind(m_movies_early.Token_Count, m_movies_mid.Token_Count, equal_var=False)
stats.ttest_ind(m_movies_early.Token_Count, m_movies_late.Token_Count, equal_var=False)
stats.ttest_ind(m_movies_mid.Token_Count, m_movies_late.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Unlike female token counts per line, male token counts per line have constantly increased with each era. Though the difference in their token counts wasn't significant from the early to mid period, there was a significant increase from the middle to late period.
###Code
#Comparing Male and Female token counts per line w/in each era
stats.ttest_ind(m_movies_early.Token_Count, f_movies_early.Token_Count, equal_var=False)
stats.ttest_ind(m_movies_mid.Token_Count, f_movies_mid.Token_Count, equal_var=False)
stats.ttest_ind(m_movies_late.Token_Count, f_movies_late.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Again, not the discrepancies that I expected. Women spoke more in the earlier movies, but not significantly, while women spoke significantly fewer tokens per line than men did in the middle and late periods!
###Code
# Comparing female Type Count over time
stats.f_oneway(f_movies_early.Type_Count, f_movies_mid.Type_Count, f_movies_late.Type_Count)
stats.ttest_ind(f_movies_early.Type_Count, f_movies_mid.Type_Count, equal_var=False)
stats.ttest_ind(f_movies_early.Type_Count, f_movies_late.Type_Count, equal_var=False)
stats.ttest_ind(f_movies_mid.Type_Count, f_movies_late.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
There are significant differences between female type count per line over time. Like token count, we see type count go down in the middle period and then increase again in the late period.
###Code
# Comparing Male Type Count per Line Over Time
stats.f_oneway(m_movies_early.Type_Count, m_movies_mid.Type_Count, m_movies_late.Type_Count)
stats.ttest_ind(m_movies_early.Type_Count, m_movies_mid.Type_Count, equal_var=False)
stats.ttest_ind(m_movies_early.Type_Count, m_movies_late.Type_Count, equal_var=False)
stats.ttest_ind(m_movies_mid.Type_Count, m_movies_late.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Unlike female type counts per line, male type counts per line have constantly increased with each era. Though the difference in their type counts wasn't significant from the early to mid period, there was a significant increase from the middle to late period.
###Code
#Comparing Male and Female type counts per line w/in each era
stats.ttest_ind(m_movies_early.Type_Count, f_movies_early.Type_Count, equal_var=False)
stats.ttest_ind(m_movies_mid.Type_Count, f_movies_mid.Type_Count, equal_var=False)
stats.ttest_ind(m_movies_late.Type_Count, f_movies_late.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Again, no significant difference in the early era of Disney, but men do use more types than women in the middle and late eras of Disney.

Gender Across Companies
###Code
f_movies_disney = f_movie_df[f_movie_df.Disney_Period != 'DREAMWORKS']
f_movies_dw = f_movie_df[f_movie_df.Disney_Period == 'DREAMWORKS']
m_movies_disney = m_movie_df[m_movie_df.Disney_Period != 'DREAMWORKS']
m_movies_dw = m_movie_df[m_movie_df.Disney_Period == 'DREAMWORKS']
## Between male and female characters is disney films
stats.ttest_ind(m_movies_disney.Token_Count, f_movies_disney.Token_Count, equal_var=False)
## Between male and female characters in Dreamworks Films
stats.ttest_ind(m_movies_dw.Token_Count, f_movies_dw.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Though in general men have higher token counts per line than women, this difference is much more significant in Dreamworks films
###Code
## Between male characters in Dreamworks and Disney
stats.ttest_ind(m_movies_disney.Token_Count, m_movies_dw.Token_Count, equal_var=False)
## Between female characters in Dreamworks and Disney
stats.ttest_ind(f_movies_disney.Token_Count, f_movies_dw.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
In Disney films, both men and women seem to have higher token counts per line than Dreamworks men and women, but this difference is much more pronounced among female characters.
###Code
## Between male and female characters is disney films
stats.ttest_ind(m_movies_disney.Type_Count, f_movies_disney.Type_Count, equal_var=False)
## Between male and female characters in Dreamworks Films
stats.ttest_ind(m_movies_dw.Type_Count, f_movies_dw.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Though in general men have higher type counts per line than women, this difference is much more significant in Dreamworks films.
###Code
## Between male characters in Dreamworks and Disney
stats.ttest_ind(m_movies_disney.Type_Count, m_movies_dw.Type_Count, equal_var=False)
## Between female characters in Dreamworks and Disney
stats.ttest_ind(f_movies_disney.Type_Count, f_movies_dw.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Role

Overall
###Code
stats.ttest_ind(pro_movie_df.Token_Count, ant_movie_df.Token_Count, equal_var = False)
stats.ttest_ind(pro_movie_df.Type_Count, ant_movie_df.Type_Count, equal_var = False)
###Output
_____no_output_____
###Markdown
Overall, there do seem to be significant differences in token counts and type counts by line based on role, with protagonists having significantly shorter lines.

Role Over Time
###Code
pro_movies_early = pro_movie_df[pro_movie_df.Disney_Period == 'EARLY']
pro_movies_mid = pro_movie_df[pro_movie_df.Disney_Period == 'MID']
pro_movies_late = pro_movie_df[pro_movie_df.Disney_Period == 'LATE']
ant_movies_early = ant_movie_df[ant_movie_df.Disney_Period == 'EARLY']
ant_movies_mid = ant_movie_df[ant_movie_df.Disney_Period == 'MID']
ant_movies_late = ant_movie_df[ant_movie_df.Disney_Period == 'LATE']
# Comparing Protagonist Token Count over time
stats.f_oneway(pro_movies_early.Token_Count, pro_movies_mid.Token_Count, pro_movies_late.Token_Count)
stats.ttest_ind(pro_movies_early.Token_Count, pro_movies_mid.Token_Count, equal_var=False)
stats.ttest_ind(pro_movies_early.Token_Count, pro_movies_late.Token_Count, equal_var=False)
stats.ttest_ind(pro_movies_mid.Token_Count, pro_movies_late.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Protagonists had more tokens per line in the early and late periods than in the middle period, to a significant degree. However, there's hardly any difference between token counts for protagonists in the middle and late periods.
###Code
# Comparing Antagonist Token Count per Line Over Time
stats.f_oneway(ant_movies_early.Token_Count, ant_movies_mid.Token_Count, ant_movies_late.Token_Count)
stats.ttest_ind(ant_movies_early.Token_Count, ant_movies_mid.Token_Count, equal_var=False)
stats.ttest_ind(ant_movies_early.Token_Count, ant_movies_late.Token_Count, equal_var=False)
stats.ttest_ind(ant_movies_mid.Token_Count, ant_movies_late.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
The Token counts per line for antagonists have not changed significantly over time.
###Code
#Comparing Pros and Ants token counts per line w/in each era
stats.ttest_ind(ant_movies_early.Token_Count, pro_movies_early.Token_Count, equal_var=False)
stats.ttest_ind(ant_movies_mid.Token_Count, pro_movies_mid.Token_Count, equal_var=False)
stats.ttest_ind(ant_movies_late.Token_Count, pro_movies_late.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
In all eras, antagonists have longer lines than protagonists, but this difference is only significant in the middle period.
###Code
# Comparing protagonist Type Count over time
stats.f_oneway(pro_movies_early.Type_Count, pro_movies_mid.Type_Count, pro_movies_late.Type_Count)
stats.ttest_ind(pro_movies_early.Type_Count, pro_movies_mid.Type_Count, equal_var=False)
stats.ttest_ind(pro_movies_early.Type_Count, pro_movies_late.Type_Count, equal_var=False)
stats.ttest_ind(pro_movies_mid.Type_Count, pro_movies_late.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
There are significant differences between protagonist type counts per line over time. Like token count, we see type count go down in the middle period and then increase again in the late period.
###Code
# Comparing Antagonist Type Count per Line Over Time
stats.f_oneway(ant_movies_early.Type_Count, ant_movies_mid.Type_Count, ant_movies_late.Type_Count)
stats.ttest_ind(ant_movies_early.Type_Count, ant_movies_mid.Type_Count, equal_var=False)
stats.ttest_ind(ant_movies_early.Type_Count, ant_movies_late.Type_Count, equal_var=False)
stats.ttest_ind(ant_movies_mid.Type_Count, ant_movies_late.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Antagonist type counts aren't significantly different over time.
###Code
#Comparing Protagonist and Antagonist type counts per line w/in each era
stats.ttest_ind(pro_movies_early.Type_Count, ant_movies_early.Type_Count, equal_var=False)
stats.ttest_ind(pro_movies_mid.Type_Count, ant_movies_mid.Type_Count, equal_var=False)
stats.ttest_ind(pro_movies_late.Type_Count, ant_movies_late.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
We see protagonists speak significantly fewer types per line than antagonists do in the middle and late periods. This difference is not strong in the early period.

Roles Across Companies
###Code
ant_movies_disney = ant_movie_df[ant_movie_df.Disney_Period != 'DREAMWORKS']
ant_movies_dw = ant_movie_df[ant_movie_df.Disney_Period == 'DREAMWORKS']
pro_movies_disney = pro_movie_df[pro_movie_df.Disney_Period != 'DREAMWORKS']
pro_movies_dw = pro_movie_df[pro_movie_df.Disney_Period == 'DREAMWORKS']
#Between antagonists in Disney and Dreamworks
stats.ttest_ind(ant_movies_disney.Token_Count, ant_movies_dw.Token_Count, equal_var=False)
#Between protagonists in Disney and Dreamworks
stats.ttest_ind(pro_movies_disney.Token_Count, pro_movies_dw.Token_Count, equal_var=False)
#Between protagonists and antagonists in Disney
stats.ttest_ind(pro_movies_disney.Token_Count, ant_movies_disney.Token_Count, equal_var=False)
#Between protagonists and antagonists in Dreamworks
stats.ttest_ind(pro_movies_dw.Token_Count, ant_movies_dw.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
There's a significant difference in how many tokens Disney and Dreamworks antagonists use (Disney ones use more). Also, in Disney movies protagonists speak significantly fewer tokens than antagonists. This difference isn't significant in Dreamworks movies.
###Code
#Between antagonists in Disney and Dreamworks
stats.ttest_ind(ant_movies_disney.Type_Count, ant_movies_dw.Type_Count, equal_var=False)
#Between protagonists in Disney and Dreamworks
stats.ttest_ind(pro_movies_disney.Type_Count, pro_movies_dw.Type_Count, equal_var=False)
#Between protagonists and antagonists in Disney
stats.ttest_ind(pro_movies_disney.Type_Count, ant_movies_disney.Type_Count, equal_var=False)
#Between protagonists and antagonists in Dreamworks
stats.ttest_ind(pro_movies_dw.Type_Count, ant_movies_dw.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Gender and Role

Let's see if token or type count by line differs based on both role and gender.
###Code
movies_gen_role = movie_df[(movie_df.Gender != 'n') & (movie_df.Role != 'N')]
pro_f_movies = movies_gen_role[(movies_gen_role.Gender == 'f') & (movies_gen_role.Role == 'PRO')]
pro_m_movies = movies_gen_role[(movies_gen_role.Gender == 'm') & (movies_gen_role.Role == 'PRO')]
ant_f_movies = movies_gen_role[(movies_gen_role.Gender == 'f') & (movies_gen_role.Role == 'ANT')]
ant_m_movies = movies_gen_role[(movies_gen_role.Gender == 'm') & (movies_gen_role.Role == 'ANT')]
stats.ttest_ind(pro_f_movies.Token_Count, pro_m_movies.Token_Count, equal_var=False)
stats.ttest_ind(ant_f_movies.Token_Count, ant_m_movies.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Female protagonists have significantly lower token counts per line than their male counterparts, but the same isn't true for female antagonists!
###Code
stats.ttest_ind(pro_f_movies.Token_Count, ant_f_movies.Token_Count, equal_var=False)
stats.ttest_ind(pro_m_movies.Token_Count, ant_m_movies.Token_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Also, female protagonists use fewer tokens per line than their antagonist counterparts, but there's no significant difference between token counts per line for male protagonists and antagonists!
###Code
stats.ttest_ind(pro_f_movies.Type_Count, pro_m_movies.Type_Count, equal_var=False)
stats.ttest_ind(ant_f_movies.Type_Count, ant_m_movies.Type_Count, equal_var=False)
stats.ttest_ind(pro_f_movies.Type_Count, ant_f_movies.Type_Count, equal_var=False)
stats.ttest_ind(pro_m_movies.Type_Count, ant_m_movies.Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Total Token and Type Counts

Gender

I decided to also look at these stats over all of each character's dialogue, to account for the short lines. Though a female character may have shorter individual lines, if she has more lines in the movie, this will be reflected in her total token and type counts.

Overall
###Code
f_chars = char_df[char_df.Gender == 'f']
m_chars = char_df[char_df.Gender == 'm']
stats.ttest_ind(f_chars.Total_Tok_Count, m_chars.Total_Tok_Count, equal_var = False)
stats.ttest_ind(f_chars.Total_Type_Count, m_chars.Total_Type_Count, equal_var = False)
###Output
_____no_output_____
###Markdown
Gender Over Time
###Code
f_chars_early = f_chars[f_chars.Disney_Period == 'EARLY']
m_chars_early = m_chars[m_chars.Disney_Period == 'EARLY']
f_chars_mid = f_chars[f_chars.Disney_Period == 'MID']
m_chars_mid = m_chars[m_chars.Disney_Period == 'MID']
f_chars_late = f_chars[f_chars.Disney_Period == 'LATE']
m_chars_late = m_chars[m_chars.Disney_Period == 'LATE']
# Comparing female Total Token Count over time
stats.f_oneway(f_chars_early.Total_Tok_Count, f_chars_mid.Total_Tok_Count, f_chars_late.Total_Tok_Count)
stats.ttest_ind(f_chars_early.Total_Tok_Count, f_chars_mid.Total_Tok_Count, equal_var=False)
stats.ttest_ind(f_chars_early.Total_Tok_Count, f_chars_late.Total_Tok_Count, equal_var=False)
stats.ttest_ind(f_chars_mid.Total_Tok_Count, f_chars_late.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
In terms of total token counts, there's no significant difference over time for female speakers.
###Code
# Comparing Male Total Token Count Over Time
stats.f_oneway(m_chars_early.Total_Tok_Count, m_chars_mid.Total_Tok_Count, m_chars_late.Total_Tok_Count)
stats.ttest_ind(m_chars_early.Total_Tok_Count, m_chars_mid.Total_Tok_Count, equal_var=False)
stats.ttest_ind(m_chars_early.Total_Tok_Count, m_chars_late.Total_Tok_Count, equal_var=False)
stats.ttest_ind(m_chars_mid.Total_Tok_Count, m_chars_late.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Again, no significant difference in total token count over time for men.
###Code
#Comparing Male and Female total token counts w/in each era
stats.ttest_ind(m_chars_early.Total_Tok_Count, f_chars_early.Total_Tok_Count, equal_var=False)
stats.ttest_ind(m_chars_mid.Total_Tok_Count, f_chars_mid.Total_Tok_Count, equal_var=False)
stats.ttest_ind(m_chars_late.Total_Tok_Count, f_chars_late.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
In each era, men have fewer total tokens than women, but this difference is only significant in the earliest era.
###Code
# Comparing female Total Type Count over time
stats.f_oneway(f_chars_early.Total_Type_Count, f_chars_mid.Total_Type_Count, f_chars_late.Total_Type_Count)
stats.ttest_ind(f_chars_early.Total_Type_Count, f_chars_mid.Total_Type_Count, equal_var=False)
stats.ttest_ind(f_chars_early.Total_Type_Count, f_chars_late.Total_Type_Count, equal_var=False)
stats.ttest_ind(f_chars_mid.Total_Type_Count, f_chars_late.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Like token count, total female type count isn't significantly different across time except for between the early and mid era.
###Code
# Comparing Male Total Type Count Over Time
stats.f_oneway(m_chars_early.Total_Type_Count, m_chars_mid.Total_Type_Count, m_chars_late.Total_Type_Count)
stats.ttest_ind(m_chars_early.Total_Type_Count, m_chars_mid.Total_Type_Count, equal_var=False)
stats.ttest_ind(m_chars_early.Total_Type_Count, m_chars_late.Total_Type_Count, equal_var=False)
stats.ttest_ind(m_chars_mid.Total_Type_Count, m_chars_late.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
No significant difference here either.
###Code
#Comparing Male and Female Total type counts w/in each era
stats.ttest_ind(m_chars_early.Total_Type_Count, f_chars_early.Total_Type_Count, equal_var=False)
stats.ttest_ind(m_chars_mid.Total_Type_Count, f_chars_mid.Total_Type_Count, equal_var=False)
stats.ttest_ind(m_chars_late.Total_Type_Count, f_chars_late.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
The only significant difference here is in the early era, when men used significantly fewer types than women.

Gender Across Companies
###Code
f_chars_disney = f_chars[f_chars.Disney_Period != 'DREAMWORKS']
f_chars_dw = f_chars[f_chars.Disney_Period == 'DREAMWORKS']
m_chars_disney = m_chars[m_chars.Disney_Period != 'DREAMWORKS']
m_chars_dw = m_chars[m_chars.Disney_Period == 'DREAMWORKS']
## Between male and female characters is disney films
stats.ttest_ind(m_chars_disney.Total_Tok_Count, f_chars_disney.Total_Tok_Count, equal_var=False)
## Between male and female characters in Dreamworks Films
stats.ttest_ind(m_chars_dw.Total_Tok_Count, f_chars_dw.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
While in Disney, men have a significantly lower total token count, in Dreamworks men have a significantly higher total token count.
###Code
## Between male characters in Dreamworks and Disney
stats.ttest_ind(m_chars_disney.Total_Tok_Count, m_chars_dw.Total_Tok_Count, equal_var=False)
## Between female characters in Dreamworks and Disney
stats.ttest_ind(f_chars_disney.Total_Tok_Count, f_chars_dw.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Male Disney characters have fewer total tokens than male Dreamworks characters, but female Disney characters have more total tokens than female Dreamworks characters.
###Code
## Between male and female characters is disney films
stats.ttest_ind(m_chars_disney.Total_Type_Count, f_chars_disney.Total_Type_Count, equal_var=False)
## Between male and female characters in Dreamworks Films
stats.ttest_ind(m_chars_dw.Total_Type_Count, f_chars_dw.Total_Type_Count, equal_var=False)
## Between male characters in Dreamworks and Disney
stats.ttest_ind(m_chars_disney.Total_Type_Count, m_chars_dw.Total_Type_Count, equal_var=False)
## Between female characters in Dreamworks and Disney
stats.ttest_ind(f_chars_disney.Total_Tok_Count, f_chars_dw.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Role

Overall
###Code
pro_chars = char_df[char_df.Role == 'PRO']
ant_chars = char_df[char_df.Role == 'ANT']
helper_chars = char_df[char_df.Role == 'HELPER']
stats.ttest_ind(pro_chars.Total_Tok_Count, ant_chars.Total_Tok_Count, equal_var = False)
stats.ttest_ind(pro_chars.Total_Type_Count, ant_chars.Total_Type_Count, equal_var = False)
###Output
_____no_output_____
###Markdown
Overall, there do seem to be significant differences in total token counts and type counts based on role, with protagonists having significantly lower totals.

Role Over Time
###Code
pro_chars_early = pro_chars[pro_chars.Disney_Period == 'EARLY']
pro_chars_mid = pro_chars[pro_chars.Disney_Period == 'MID']
pro_chars_late = pro_chars[pro_chars.Disney_Period == 'LATE']
ant_chars_early = ant_chars[ant_chars.Disney_Period == 'EARLY']
ant_chars_mid = ant_chars[ant_chars.Disney_Period == 'MID']
ant_chars_late = ant_chars[ant_chars.Disney_Period == 'LATE']
# Comparing Protagonist Token Count over time
stats.f_oneway(pro_chars_early.Total_Tok_Count, pro_chars_mid.Total_Tok_Count, pro_chars_late.Total_Tok_Count)
stats.ttest_ind(pro_chars_early.Total_Tok_Count, pro_chars_mid.Total_Tok_Count, equal_var=False)
stats.ttest_ind(pro_chars_early.Total_Tok_Count, pro_chars_late.Total_Tok_Count, equal_var=False)
stats.ttest_ind(pro_chars_mid.Total_Tok_Count, pro_chars_late.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Protagonist total token count hasn't changed significantly over time
###Code
# Comparing Antagonist Total Token Count Over Time
stats.f_oneway(ant_chars_early.Total_Tok_Count, ant_chars_mid.Total_Tok_Count, ant_chars_late.Total_Tok_Count)
stats.ttest_ind(ant_chars_early.Total_Tok_Count, ant_chars_mid.Total_Tok_Count, equal_var=False)
stats.ttest_ind(ant_chars_early.Total_Tok_Count, ant_chars_late.Total_Tok_Count, equal_var=False)
stats.ttest_ind(ant_chars_mid.Total_Tok_Count, ant_chars_late.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Total token counts have not changed significantly over time for antagonists.
###Code
#Comparing Pros and Ants token counts w/in each era
stats.ttest_ind(ant_chars_early.Total_Tok_Count, pro_chars_early.Total_Tok_Count, equal_var=False)
stats.ttest_ind(ant_chars_mid.Total_Tok_Count, pro_chars_mid.Total_Tok_Count, equal_var=False)
stats.ttest_ind(ant_chars_late.Total_Tok_Count, pro_chars_late.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
In all eras, antagonists have more total tokens than protagonists, but this difference is only significant in the middle and late periods.
###Code
# Comparing protagonist Total Type Count over time
stats.f_oneway(pro_chars_early.Total_Type_Count, pro_chars_mid.Total_Type_Count, pro_chars_late.Total_Type_Count)
stats.ttest_ind(pro_chars_early.Total_Type_Count, pro_chars_mid.Total_Type_Count, equal_var=False)
stats.ttest_ind(pro_chars_early.Total_Type_Count, pro_chars_late.Total_Type_Count, equal_var=False)
stats.ttest_ind(pro_chars_mid.Total_Type_Count, pro_chars_late.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
There's no significant difference between protagonist total type count across eras.
###Code
# Comparing Antagonist Total Type Count Over Time
stats.f_oneway(ant_chars_early.Total_Type_Count, ant_chars_mid.Total_Type_Count, ant_chars_late.Total_Type_Count)
stats.ttest_ind(ant_chars_early.Total_Type_Count, ant_chars_mid.Total_Type_Count, equal_var=False)
stats.ttest_ind(ant_chars_early.Total_Type_Count, ant_chars_late.Total_Type_Count, equal_var=False)
stats.ttest_ind(ant_chars_mid.Total_Type_Count, ant_chars_late.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Antagonist total type counts aren't significantly different over time.
###Code
#Comparing Protagonist and Antagonist total type counts w/in each era
stats.ttest_ind(pro_chars_early.Total_Type_Count, ant_chars_early.Total_Type_Count, equal_var=False)
stats.ttest_ind(pro_chars_mid.Total_Type_Count, ant_chars_mid.Total_Type_Count, equal_var=False)
stats.ttest_ind(pro_chars_late.Total_Type_Count, ant_chars_late.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
We see protagonists use significantly fewer total types than antagonists do in the middle period. This difference is not strong in other periods.

Roles Across Companies
###Code
ant_chars_disney = ant_chars[ant_chars.Disney_Period != 'DREAMWORKS']
ant_chars_dw = ant_chars[ant_chars.Disney_Period == 'DREAMWORKS']
pro_chars_disney = pro_chars[pro_chars.Disney_Period != 'DREAMWORKS']
pro_chars_dw = pro_chars[pro_chars.Disney_Period == 'DREAMWORKS']
#Between antagonists in Disney and Dreamworks
stats.ttest_ind(ant_chars_disney.Total_Tok_Count, ant_chars_dw.Total_Tok_Count, equal_var=False)
#Between protagonists in Disney and Dreamworks
stats.ttest_ind(pro_chars_disney.Total_Tok_Count, pro_chars_dw.Total_Tok_Count, equal_var=False)
#Between protagonists and antagonists in Disney
stats.ttest_ind(pro_chars_disney.Total_Tok_Count, ant_chars_disney.Total_Tok_Count, equal_var=False)
#Between protagonists and antagonists in Dreamworks
stats.ttest_ind(pro_chars_dw.Total_Tok_Count, ant_chars_dw.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
There isn't a significant difference in how many total tokens Disney and Dreamworks antagonists use, but there is for protagonists (with Disney protagonists speaking less). Also, in Disney and Dreamworks movies protagonists speak significantly more total tokens than antagonists.
###Code
#Between antagonists in Disney and Dreamworks
stats.ttest_ind(ant_chars_disney.Total_Type_Count, ant_chars_dw.Total_Type_Count, equal_var=False)
#Between protagonists in Disney and Dreamworks
stats.ttest_ind(pro_chars_disney.Total_Type_Count, pro_chars_dw.Total_Type_Count, equal_var=False)
#Between protagonists and antagonists in Disney
stats.ttest_ind(pro_chars_disney.Total_Type_Count, ant_chars_disney.Total_Type_Count, equal_var=False)
#Between protagonists and antagonists in Dreamworks
stats.ttest_ind(pro_chars_dw.Total_Type_Count, ant_chars_dw.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Gender and Role

Let's see if total token or type count differs based on both role and gender.
###Code
chars_gen_role = char_df[(char_df.Gender != 'n') & (char_df.Role != 'N')]
pro_f_chars = chars_gen_role[(chars_gen_role.Gender == 'f') & (chars_gen_role.Role == 'PRO')]
pro_m_chars = chars_gen_role[(chars_gen_role.Gender == 'm') & (chars_gen_role.Role == 'PRO')]
ant_f_chars = chars_gen_role[(chars_gen_role.Gender == 'f') & (chars_gen_role.Role == 'ANT')]
ant_m_chars = chars_gen_role[(chars_gen_role.Gender == 'm') & (chars_gen_role.Role == 'ANT')]
stats.ttest_ind(pro_f_chars.Total_Tok_Count, pro_m_chars.Total_Tok_Count, equal_var=False)
stats.ttest_ind(ant_f_chars.Total_Tok_Count, ant_m_chars.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Female protagonists have significantly lower total token counts than their male counterparts, but the same isn't true for female antagonists!
###Code
stats.ttest_ind(pro_f_chars.Total_Tok_Count, ant_f_chars.Total_Tok_Count, equal_var=False)
stats.ttest_ind(pro_m_chars.Total_Tok_Count, ant_m_chars.Total_Tok_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
Also, female and male protagonists use fewer total tokens than their antagonist counterparts.
###Code
stats.ttest_ind(pro_f_chars.Total_Type_Count, pro_m_chars.Total_Type_Count, equal_var=False)
stats.ttest_ind(ant_f_chars.Total_Type_Count, ant_m_chars.Total_Type_Count, equal_var=False)
stats.ttest_ind(pro_f_chars.Total_Type_Count, ant_f_chars.Total_Type_Count, equal_var=False)
stats.ttest_ind(pro_m_chars.Total_Type_Count, ant_m_chars.Total_Type_Count, equal_var=False)
###Output
_____no_output_____
###Markdown
TTR (Total Type Count / Total Token Count)

Because TTR is so sensitive to text length, I decided to look at TTR computed as Total Type Count / Total Token Count.

Gender

Overall
###Code
stats.ttest_ind(f_chars.TTR, m_chars.TTR, equal_var = False)
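# Added illustration: TTR is just distinct types over total tokens. For a toy
# line (not from the corpus), "the cat saw the dog" has 4 types / 5 tokens:
toy_tokens = "the cat saw the dog".split()
len(set(toy_tokens)) / len(toy_tokens)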
###Output
_____no_output_____
###Markdown
In terms of Total Token Count, Type Count, and TTR, there's no significant difference between genders. This suggests that while men may speak in longer spurts than women, their overall speech variety doesn't differ much.

Gender Over Time
###Code
# Comparing female TTR over time
stats.f_oneway(f_chars_early.TTR, f_chars_mid.TTR, f_chars_late.TTR)
stats.ttest_ind(f_chars_early.TTR, f_chars_mid.TTR, equal_var=False)
stats.ttest_ind(f_chars_early.TTR, f_chars_late.TTR, equal_var=False)
stats.ttest_ind(f_chars_mid.TTR, f_chars_late.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Overall, female TTR has consistently increased over time, but not always significantly. The middle era and late era aren't significantly different.
###Code
# Comparing male TTR over time
stats.f_oneway(m_chars_early.TTR, m_chars_mid.TTR, m_chars_late.TTR)
stats.ttest_ind(m_chars_early.TTR, m_chars_mid.TTR, equal_var=False)
stats.ttest_ind(m_chars_early.TTR, m_chars_late.TTR, equal_var=False)
stats.ttest_ind(m_chars_mid.TTR, m_chars_late.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Overall, male TTR has also increased with each era. But this time, the jump between the early era and the mid era is not significant
###Code
#Comparing Male and Female ttr w/in each era
stats.ttest_ind(m_chars_early.TTR, f_chars_early.TTR, equal_var=False)
stats.ttest_ind(m_chars_mid.TTR, f_chars_mid.TTR, equal_var=False)
stats.ttest_ind(m_chars_late.TTR, f_chars_late.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Within each era, there is no significant difference between male and female TTR (though it appears to be closest to significant in the early period).

Gender Across Companies
###Code
## Between male and female characters is disney films
stats.ttest_ind(m_chars_disney.TTR, f_chars_disney.TTR, equal_var=False)
## Between male and female characters in Dreamworks Films
stats.ttest_ind(m_chars_dw.TTR, f_chars_dw.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Female characters actually have higher TTRs in Dreamworks, but this difference is not significant. In fact, the TTRs for each gender in each company aren't significantly different.
###Code
## Between male characters in Dreamworks and Disney
stats.ttest_ind(m_chars_disney.TTR, m_chars_dw.TTR, equal_var=False)
## Between female characters in Dreamworks and Disney
stats.ttest_ind(f_chars_disney.TTR, f_chars_dw.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
There's also no significant difference between male or female TTRs depending on which company they're from.

Role

Overall
###Code
stats.ttest_ind(pro_chars.TTR, ant_chars.TTR, equal_var = False)
#definite difference here!!
###Output
_____no_output_____
###Markdown
The overall difference in TTR between male and female characters isn't significant, but this difference is significant between roles. This suggests that villains speak in longer spurts AND use a wider variety of words overall.

Role Over Time
###Code
# Comparing protagonist TTR over time
stats.f_oneway(pro_chars_early.TTR, pro_chars_mid.TTR, pro_chars_late.TTR)
stats.ttest_ind(pro_chars_early.TTR, pro_chars_mid.TTR, equal_var=False)
stats.ttest_ind(pro_chars_early.TTR, pro_chars_late.TTR, equal_var=False)
stats.ttest_ind(pro_chars_mid.TTR, pro_chars_late.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Overall, protagonist TTR has not changed significantly over time.
###Code
# Comparing antagonist TTR over time
stats.f_oneway(ant_chars_early.TTR, ant_chars_mid.TTR, ant_chars_late.TTR)
stats.ttest_ind(ant_chars_early.TTR, ant_chars_mid.TTR, equal_var=False)
stats.ttest_ind(ant_chars_early.TTR, ant_chars_late.TTR, equal_var=False)
stats.ttest_ind(ant_chars_mid.TTR, ant_chars_late.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Overall, antagonist TTR has not changed consistently with each era, but the difference is significant. The only two eras with a significant difference are the middle and late era, in which the antagonist TTR drops.
###Code
#Comparing pro and ant ttr w/in each era
stats.ttest_ind(pro_chars_early.TTR, ant_chars_early.TTR, equal_var=False)
stats.ttest_ind(pro_chars_mid.TTR, ant_chars_mid.TTR, equal_var=False)
stats.ttest_ind(pro_chars_late.TTR, ant_chars_late.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Protagonists have significantly lower TTRs than antagonists in the middle era! But this difference isn't significant in any other era.

Role Across Companies
###Code
## Between pro and ant characters is disney films
stats.ttest_ind(pro_chars_disney.TTR, ant_chars_disney.TTR, equal_var=False)
## Between pro and ant characters in Dreamworks Films
stats.ttest_ind(pro_chars_dw.TTR, ant_chars_dw.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
In both companies, the protagonists' TTR is significantly less than the antagonists' TTR (though this difference is more pronounced in Disney)
###Code
## Between pro characters in Dreamworks and Disney
stats.ttest_ind(pro_chars_disney.TTR, pro_chars_dw.TTR, equal_var=False)
## Between ant characters in Dreamworks and Disney
stats.ttest_ind(ant_chars_disney.TTR, ant_chars_dw.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
There's also no significant difference between protagonist or antagonist TTR depending on which company they're from.

Gender and Role
###Code
stats.ttest_ind(pro_f_chars.TTR, pro_m_chars.TTR, equal_var=False)
stats.ttest_ind(ant_f_chars.TTR, ant_m_chars.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Female antagonists have a significantly lower TTR than male antagonists.
###Code
stats.ttest_ind(pro_f_chars.TTR, ant_f_chars.TTR, equal_var=False)
stats.ttest_ind(pro_m_chars.TTR, ant_m_chars.TTR, equal_var=False)
###Output
_____no_output_____
###Markdown
Male protagonists have significantly lower TTRs than male antagonists.

K-Band

K-bands are another way to measure vocabulary sophistication. Before we can measure this, we need to get rid of NaN values (32 of them).
###Code
char_df = char_df[char_df.AVG_K_BAND.notnull()]
char_df.info()
char_df.AVG_K_BAND.describe()
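# Hedged sketch (hypothetical helper, added for illustration): a word's
# K-band is usually the 1000-word frequency band its corpus rank falls in
# (band 1 = ranks 1-1000, band 2 = ranks 1001-2000, ...), so AVG_K_BAND is
# presumably the mean band over a character's recognized tokens. Assuming a
# word -> rank lookup `rank_of`, the computation would look like:
def avg_k_band_sketch(tokens, rank_of):
    bands = [(rank_of[w] - 1) // 1000 + 1 for w in tokens if w in rank_of]
    return sum(bands) / len(bands) if bands else float('nan')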
f_chars = char_df[char_df.Gender == 'f']
m_chars = char_df[char_df.Gender == 'm']
pro_chars = char_df[char_df.Role == 'PRO']
ant_chars = char_df[char_df.Role == 'ANT']
helper_chars = char_df[char_df.Role == 'HELPER']
f_chars_early = f_chars[f_chars.Disney_Period == 'EARLY']
m_chars_early = m_chars[m_chars.Disney_Period == 'EARLY']
f_chars_mid = f_chars[f_chars.Disney_Period == 'MID']
m_chars_mid = m_chars[m_chars.Disney_Period == 'MID']
f_chars_late = f_chars[f_chars.Disney_Period == 'LATE']
m_chars_late = m_chars[m_chars.Disney_Period == 'LATE']
f_chars_disney = f_chars[f_chars.Disney_Period != 'DREAMWORKS']
f_chars_dw = f_chars[f_chars.Disney_Period == 'DREAMWORKS']
m_chars_disney = m_chars[m_chars.Disney_Period != 'DREAMWORKS']
m_chars_dw = m_chars[m_chars.Disney_Period == 'DREAMWORKS']
pro_chars_early = pro_chars[pro_chars.Disney_Period == 'EARLY']
pro_chars_mid = pro_chars[pro_chars.Disney_Period == 'MID']
pro_chars_late = pro_chars[pro_chars.Disney_Period == 'LATE']
ant_chars_early = ant_chars[ant_chars.Disney_Period == 'EARLY']
ant_chars_mid = ant_chars[ant_chars.Disney_Period == 'MID']
ant_chars_late = ant_chars[ant_chars.Disney_Period == 'LATE']
ant_chars_disney = ant_chars[ant_chars.Disney_Period != 'DREAMWORKS']
ant_chars_dw = ant_chars[ant_chars.Disney_Period == 'DREAMWORKS']
pro_chars_disney = pro_chars[pro_chars.Disney_Period != 'DREAMWORKS']
pro_chars_dw = pro_chars[pro_chars.Disney_Period == 'DREAMWORKS']
chars_gen_role = char_df[(char_df.Gender != 'n') & (char_df.Role != 'N')]
pro_f_chars = chars_gen_role[(chars_gen_role.Gender == 'f') & (chars_gen_role.Role == 'PRO')]
pro_m_chars = chars_gen_role[(chars_gen_role.Gender == 'm') & (chars_gen_role.Role == 'PRO')]
ant_f_chars = chars_gen_role[(chars_gen_role.Gender == 'f') & (chars_gen_role.Role == 'ANT')]
ant_m_chars = chars_gen_role[(chars_gen_role.Gender == 'm') & (chars_gen_role.Role == 'ANT')]
###Output
_____no_output_____
###Markdown
Gender

Overall
###Code
stats.ttest_ind(f_chars.AVG_K_BAND, m_chars.AVG_K_BAND, equal_var = False)
###Output
_____no_output_____
###Markdown
A difference, but not significant.

Gender Over Time
###Code
# Comparing female k-bands over time
stats.f_oneway(f_chars_early.AVG_K_BAND, f_chars_mid.AVG_K_BAND, f_chars_late.AVG_K_BAND)
stats.ttest_ind(f_chars_early.AVG_K_BAND, f_chars_mid.AVG_K_BAND, equal_var=False)
stats.ttest_ind(f_chars_early.AVG_K_BAND, f_chars_late.AVG_K_BAND, equal_var=False)
stats.ttest_ind(f_chars_mid.AVG_K_BAND, f_chars_late.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
Across eras, there's no significant difference in k-band for female characters.
###Code
# Comparing male k-bands over time
stats.f_oneway(m_chars_early.AVG_K_BAND, m_chars_mid.AVG_K_BAND, m_chars_late.AVG_K_BAND)
stats.ttest_ind(m_chars_early.AVG_K_BAND, m_chars_mid.AVG_K_BAND, equal_var=False)
stats.ttest_ind(m_chars_early.AVG_K_BAND, m_chars_late.AVG_K_BAND, equal_var=False)
stats.ttest_ind(m_chars_mid.AVG_K_BAND, m_chars_late.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
There's no significant difference between male k-bands either.
###Code
#Comparing Male and Female k-band w/in each era
stats.ttest_ind(m_chars_early.AVG_K_BAND, f_chars_early.AVG_K_BAND, equal_var=False)
stats.ttest_ind(m_chars_mid.AVG_K_BAND, f_chars_mid.AVG_K_BAND, equal_var=False)
stats.ttest_ind(m_chars_late.AVG_K_BAND, f_chars_late.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
Within each era, male characters have higher k-bands than female characters, but the only significant difference between male and female k-band is in the middle era.

Gender Across Companies
###Code
## Between male and female characters is disney films
stats.ttest_ind(m_chars_disney.AVG_K_BAND, f_chars_disney.AVG_K_BAND, equal_var=False)
## Between male and female characters in Dreamworks Films
stats.ttest_ind(m_chars_dw.AVG_K_BAND, f_chars_dw.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
In both companies, male characters have higher k-bands, but this difference is only significant in Disney movies.
###Code
## Between male characters in Dreamworks and Disney
stats.ttest_ind(m_chars_disney.AVG_K_BAND, m_chars_dw.AVG_K_BAND, equal_var=False)
## Between female characters in Dreamworks and Disney
stats.ttest_ind(f_chars_disney.AVG_K_BAND, f_chars_dw.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
There's also no significant difference between male or female k-bands depending on which company they're from.

Role

Overall
###Code
stats.ttest_ind(pro_chars.AVG_K_BAND, ant_chars.AVG_K_BAND, equal_var = False)
###Output
_____no_output_____
###Markdown
The overall difference in k-band between protagonists and antagonists isn't significant.

Role Over Time
###Code
# Comparing protagonist k-band over time
stats.f_oneway(pro_chars_early.AVG_K_BAND, pro_chars_mid.AVG_K_BAND, pro_chars_late.AVG_K_BAND)
stats.ttest_ind(pro_chars_early.AVG_K_BAND, pro_chars_mid.AVG_K_BAND, equal_var=False)
stats.ttest_ind(pro_chars_early.AVG_K_BAND, pro_chars_late.AVG_K_BAND, equal_var=False)
stats.ttest_ind(pro_chars_mid.AVG_K_BAND, pro_chars_late.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
Overall, protagonist k-band has gone up over time, but only increased significantly between the early and late periods.
###Code
# Comparing antagonist k-band over time
stats.f_oneway(ant_chars_early.AVG_K_BAND, ant_chars_mid.AVG_K_BAND, ant_chars_late.AVG_K_BAND)
stats.ttest_ind(ant_chars_early.AVG_K_BAND, ant_chars_mid.AVG_K_BAND, equal_var=False)
stats.ttest_ind(ant_chars_early.AVG_K_BAND, ant_chars_late.AVG_K_BAND, equal_var=False)
stats.ttest_ind(ant_chars_mid.AVG_K_BAND, ant_chars_late.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
Overall, antagonist k-band has not changed consistently with each era, but the difference between the mid and late period is significant--antagonists' k-bands have gone down between these two periods.
###Code
#Comparing pro and ant k-band w/in each era
stats.ttest_ind(pro_chars_early.AVG_K_BAND, ant_chars_early.AVG_K_BAND, equal_var=False)
stats.ttest_ind(pro_chars_mid.AVG_K_BAND, ant_chars_mid.AVG_K_BAND, equal_var=False)
stats.ttest_ind(pro_chars_late.AVG_K_BAND, ant_chars_late.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
Protagonists have significantly lower k-bands in the early and middle period. Though they have higher k-bands than antagonists in the late period, this difference is not significant.

Role Across Companies
###Code
## Between pro and ant characters in Disney films
stats.ttest_ind(pro_chars_disney.AVG_K_BAND, ant_chars_disney.AVG_K_BAND, equal_var=False)
## Between pro and ant characters in Dreamworks Films
stats.ttest_ind(pro_chars_dw.AVG_K_BAND, ant_chars_dw.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
In both companies, the difference between protagonist and antagonist k-bands is not significant.
###Code
## Between pro characters in Dreamworks and Disney
stats.ttest_ind(pro_chars_disney.AVG_K_BAND, pro_chars_dw.AVG_K_BAND, equal_var=False)
## Between ant characters in Dreamworks and Disney
stats.ttest_ind(ant_chars_disney.AVG_K_BAND, ant_chars_dw.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
There's a significant difference between antagonists across companies--Disney antagonists are more likely to have a higher k-band. Gender and Role
###Code
stats.ttest_ind(pro_f_chars.AVG_K_BAND, pro_m_chars.AVG_K_BAND, equal_var=False)
stats.ttest_ind(ant_f_chars.AVG_K_BAND, ant_m_chars.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
There is no significant difference here.
###Code
stats.ttest_ind(pro_f_chars.AVG_K_BAND, ant_f_chars.AVG_K_BAND, equal_var=False)
stats.ttest_ind(pro_m_chars.AVG_K_BAND, ant_m_chars.AVG_K_BAND, equal_var=False)
###Output
_____no_output_____
###Markdown
And no significant difference here either. A Quick Peek at TTR by Line
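As a quick refresher before computing it: the type-token ratio of a line is the number of distinct words (types) divided by the total number of words (tokens),

$$ \mathrm{TTR} = \frac{\text{Type\_Count}}{\text{Token\_Count}} $$

which is exactly what the next cell computes column-wise.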
###Code
movie_df['TTR'] = movie_df.Type_Count / movie_df.Token_Count
f_movie_df = movie_df[movie_df.Gender == 'f']
m_movie_df = movie_df[movie_df.Gender == 'm']
stats.ttest_ind(f_movie_df.TTR, m_movie_df.TTR, equal_var = False)
movie_df.groupby('Gender')['TTR'].describe()
###Output
_____no_output_____
###Markdown
Wow! This makes the difference seem extremely significant! But recall from earlier that females have significantly shorter lines. The caveat of TTR is that as line length goes up, you're more likely to get a lower TTR. The difference above is worth noting, but I don't think that it tells the whole story.
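One way to mitigate this length sensitivity is a moving-average TTR, which averages the TTR of fixed-size token windows so that line length matters much less. A minimal sketch (the `Line` column holding each line's raw text is a hypothetical name, adjust to whatever the dataframe actually uses):

```python
def moving_average_ttr(text, window=25):
    """Average TTR over fixed-size token windows (MATTR)."""
    tokens = text.split()
    if len(tokens) <= window:
        # Too short for a full window: fall back to plain TTR
        return len(set(tokens)) / len(tokens) if tokens else 0.0
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

# Hypothetical usage, assuming a 'Line' column with the raw text:
# movie_df['MATTR'] = movie_df['Line'].apply(moving_average_ttr)
```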
###Code
#TTR by line for Role
ant_movie_df = movie_df[movie_df.Role == 'ANT']
pro_movie_df = movie_df[movie_df.Role == 'PRO']
movie_df.groupby('Role')['TTR'].describe()
stats.ttest_ind(ant_movie_df.TTR, pro_movie_df.TTR, equal_var = False)
###Output
_____no_output_____ |
examples/ColorTableaux.ipynb | ###Markdown
A colored tableau with custom interaction. _Removing a cell triggers more removing._
###Code
from sage.combinat.tableau import Tableau
t = Tableau([[1,1,1,1,1],[1,1,1,1],[1,1,1],[1,1],[1]])
from sage_widget_adapters.combinat.tableau_grid_view_adapter import TableauGridViewAdapter
class ColorTableauGVAdapter(TableauGridViewAdapter):
    @staticmethod
    def cell_to_display(cell_content, display_type):
        # A non-empty cell (content 1) is rendered as an unchecked (False) button
        if cell_content:
            return False
        return True
    def display_to_cell(self, display_value, display_type):
        # A checked (True) button maps back to cell content 0, unchecked to 1
        if display_value:
            return 0
        return 1
@staticmethod
def addable_cells(obj):
return []
def remove_cell(self, obj, pos, dirty={}):
# We pop all corners from pos to the right end
tl = super(ColorTableauGVAdapter, self).remove_cell(obj, pos, dirty).to_list()
for c in self.removable_cells(obj):
if c[1] > pos[1]:
tl[c[0]].pop()
try:
return self.objclass(tl)
        except Exception:
print("Cell (%s,%s) cannot be removed from this object!" % pos)
return obj
cta = ColorTableauGVAdapter()
%%html
<style>.red {background-color: red}
.yellow {background-color: yellow}
</style>
from sage_combinat_widgets.grid_view_widget import GridViewWidget, ButtonCell, BlankButton, styled_button_cell
from ipywidgets import Layout
blyt = Layout(width='25px',height='25px', margin='0', padding='0')
w = GridViewWidget(t, cta, cell_layout=blyt,
cell_widget_classes=[styled_button_cell(style_name="red"), styled_button_cell(style_name="yellow")],
cell_widget_class_index=lambda x:x[1]%2, # Alternate one red column and one yellow column
display_convention='fr')
w
###Output
_____no_output_____ |
course1/week1-ungraded-lab/server.ipynb | ###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model. Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work! This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results. The sequence of steps/tasks to complete in this lab is as follows: 1. Inspect the image data set used for object detection; 2. Take a look at the model itself; 3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the images. Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
_____no_output_____
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
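One note before trying it out: judging by the example above, each `bbox` entry has the form `[x_min, y_min, x_max, y_max]` in pixel coordinates. Purely as an illustration of that convention (a hedged sketch, not cvlib's actual implementation), drawing a single box by hand with plain OpenCV would look roughly like:

```python
import cv2

def draw_one_box(img, box, label, conf):
    # box is assumed to be [x_min, y_min, x_max, y_max] in pixel coordinates
    x1, y1, x2, y2 = box
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)  # green box, 2px thick
    cv2.putText(img, f"{label}: {conf:.2f}", (x1, max(y1 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)  # label above the box
    return img
```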
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
_____no_output_____
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
_____no_output_____
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like. One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
_____no_output_____
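For comparison, one could also run the full model on the same image at the default confidence (a quick sketch; note that the first such call downloads the much larger `yolov3` weights):

```python
detect_and_draw_box("fruits.jpg", model="yolov3")
```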
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
    # Accept common image extensions regardless of case
    valid_extension = filename.split(".")[-1].lower() in ("jpg", "jpeg", "png")
    if not valid_extension:
        raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
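Since the running server blocks this notebook, it has to be exercised from elsewhere, e.g. the built-in /docs client or a separate Python session. A minimal sketch of such a client using the `requests` library (assuming the server is running locally on port 8000 and `images/fruits.jpg` exists):

```python
import requests

url = "http://localhost:8000/predict"
with open("images/fruits.jpg", "rb") as image_file:
    response = requests.post(
        url,
        params={"model": "yolov3-tiny"},  # query parameter expected by the endpoint
        files={"file": ("fruits.jpg", image_file, "image/jpeg")},  # multipart upload
    )

# The endpoint streams the annotated image back as bytes
with open("fruits_with_boxes.jpg", "wb") as out_file:
    out_file.write(response.content)
```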
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
_____no_output_____
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model. Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work! This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results. The sequence of steps/tasks to complete in this lab is as follows: 1. Inspect the image data set used for object detection; 2. Take a look at the model itself; 3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the images. Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.5718880891799927
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5819529294967651
Detected object: orange with confidence level of 0.534569263458252
Detected object: orange with confidence level of 0.5153823494911194
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like. One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5819529294967651
Detected object: orange with confidence level of 0.534569263458252
Detected object: orange with confidence level of 0.5153823494911194
Detected object: apple with confidence level of 0.3484269678592682
Detected object: orange with confidence level of 0.32852253317832947
Detected object: apple with confidence level of 0.31235381960868835
Detected object: orange with confidence level of 0.27941495180130005
Detected object: orange with confidence level of 0.2749632000923157
Detected object: apple with confidence level of 0.27466171979904175
Detected object: orange with confidence level of 0.2146201878786087
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
    # Accept common image extensions regardless of case
    valid_extension = filename.split(".")[-1].lower() in ("jpg", "jpeg", "png")
    if not valid_extension:
        raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [27817]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:62279 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:62279 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:62280 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:62280 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 127.0.0.1:62287 - "POST /predict?model=yolov3-tiny HTTP/1.1" 400 Bad Request
INFO: 127.0.0.1:62287 - "POST /predict?model=yolov3-tiny HTTP/1.1" 400 Bad Request
INFO: 127.0.0.1:62287 - "POST /predict?model=yolov3-tiny HTTP/1.1" 400 Bad Request
INFO: 127.0.0.1:62287 - "POST /predict?model=yolov3-tiny HTTP/1.1" 400 Bad Request
INFO: 127.0.0.1:62287 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:62326 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:62327 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:62327 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 127.0.0.1:62332 - "POST /predict?model=yolov3 HTTP/1.1" 400 Bad Request
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [27817]
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model. Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work! This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results. The sequence of steps/tasks to complete in this lab is as follows: 1. Inspect the image data set used for object detection; 2. Take a look at the model itself; 3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the images. Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like. One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
    # Accept common image extensions regardless of case
    valid_extension = filename.split(".")[-1].lower() in ("jpg", "jpeg", "png")
    if not valid_extension:
        raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, confidence=confidence, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
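Note that this version of the `prediction` endpoint also takes a `confidence` query parameter, so a client now has to supply it alongside `model`. A minimal sketch of an updated request (same assumptions as the earlier client example):

```python
import requests

response = requests.post(
    "http://localhost:8000/predict",
    params={"model": "yolov3-tiny", "confidence": 0.2},  # confidence is now required
    files={"file": open("images/fruits.jpg", "rb")},
)
```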
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [130749]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model. Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work! This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results. The sequence of steps/tasks to complete in this lab is as follows: 1. Inspect the image data set used for object detection; 2. Take a look at the model itself; 3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the images. Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
2022-03-25 10:46:37.212701: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-03-25 10:46:37.212986: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence level

Looks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange, since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like.

One possibility is that the model **did** detect the other fruits, but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759876132011414
Detected object: orange with confidence level of 0.32876095175743103
Detected object: apple with confidence level of 0.31244680285453796
Detected object: orange with confidence level of 0.2798606753349304
Detected object: orange with confidence level of 0.2749978303909302
Detected object: apple with confidence level of 0.2744506895542145
Detected object: orange with confidence level of 0.21419063210487366
###Markdown
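To make the effect of the threshold more systematic, you can count how many objects are reported at a few different confidence levels. The sketch below is illustrative only; it assumes `images/fruits.jpg` exists and simply re-runs the detector once per threshold:
```python
import cv2
import cvlib as cv

img = cv2.imread("images/fruits.jpg")

for threshold in (0.5, 0.4, 0.3, 0.2):
    bbox, label, conf = cv.detect_common_objects(img, confidence=threshold, model="yolov3-tiny")
    print(f"confidence >= {threshold}: {len(label)} object(s) reported")
```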
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.

As for the orange that was misclassified as an apple in this example, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production.

Deploying the model using fastAPI

Placing your object detection model in a server

Now that you know how the model works, it is time for you to deploy it! Aren't you excited? :)

Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server:
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications

Client-Server model

When talking about **deploying**, what is usually meant is putting all of the software required for predicting on a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.

The important thing you need to focus on is that the Machine Learning model lives on a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.

Let's get started by creating an instance of the `FastAPI` class:

```python
app = FastAPI()
```

The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place, to run the server you only need to use the command:

```python
uvicorn.run(app)
```

Your API is coded using fastAPI, but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab.

Endpoints

You can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models at the following endpoints:

- `myawesomemodel.com/count-cars/`
- `myawesomemodel.com/count-apples/`
- `myawesomemodel.com/count-plants/`

Each model would do what its name pattern suggests.

In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a decorator that specifies the HTTP method allowed (more on this next) and the URL pattern it will use.

The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":

```python
@app.get("/my-endpoint")
def handle_endpoint():
    ...
```

HTTP Requests

The client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication uses some verbs to denote common actions. Two very common verbs are:

- `GET` -> Retrieves information from the server.
- `POST` -> Provides information to the server, which it uses to respond.

If your client does a `GET request` to an endpoint of a server, you will get some information from this endpoint without the need to provide additional information.
In the case of a `POST request` you are explicitly telling the server that you will provide some information that must be processed in some way. Interactions with Machine Learning models living on endpoints are usually done via a `POST request`, since you need to provide the information that is required to compute a prediction.

Let's take a look at a POST request:

```python
@app.post("/my-other-endpoint")
def handle_other_endpoint(param1: int, param2: str):
    ...
```

For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string.

Why fastAPI?

With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8081/docs. Isn't that convenient?

Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
    return "Congratulations! Your API is working as expected. Now head over to http://localhost:8081/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!

This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
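Once the server is up, you are not limited to the built-in "/docs" client: any HTTP client can talk to the endpoint. As a sketch (run it from a separate terminal or notebook while the server blocks this one; the image path and output filename are just examples, and the port matches the `uvicorn.run` call below):
```python
import requests

url = "http://localhost:8081/predict"

# "model" travels as a query parameter; the image goes in the request body
with open("images/fruits.jpg", "rb") as image_file:
    response = requests.post(url, params={"model": "yolov3-tiny"}, files={"file": image_file})

print(response.status_code)

# The endpoint streams back the annotated image, so save the raw bytes
with open("prediction.jpg", "wb") as f:
    f.write(response.content)
```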
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8081)
###Output
INFO: Started server process [216159]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8081 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model

Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!

This lab is all about deploying a real machine learning model and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and it yields accurate object detection results.

The sequence of steps/tasks to complete in this lab is as follows:

1. Inspect the image data set used for object detection
2. Take a look at the model itself
3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/)

Object Detection with YOLOV3

Inspecting the images

Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the model

Now that you have a sense of the image data and the objects present, let's see whether the model is able to detect and classify them correctly. For this you will be using [`cvlib`](https://www.cvlib.net/), a simple but powerful library for object detection that is powered by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).

More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:

- `bbox`: list of lists containing the bounding box coordinates of the detected objects. Example:
    ```python
    [[32, 76, 128, 192], [130, 83, 220, 185]]
    ```
- `label`: list of labels for the detected objects. Example:
    ```python
    ['apple', 'apple']
    ```
- `conf`: list of confidence scores for the detected objects. Example:
    ```python
    [0.6187325716018677, 0.42835739254951477]
    ```

In the next section you will see these elements in action.

Creating the detect_and_draw_box function

Before using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function, which takes as input arguments the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected objects.

You might ask yourself why this function receives the model as an input argument. What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much smaller and requires less computational power: the `yolov3-tiny` version. As the name indicates, this model is designed for constrained environments that cannot store big models. This comes with a natural tradeoff: the results are less accurate than those of the full model, but it still works pretty well. Going forward, we recommend you stick to it, since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.

The model output is a vector of probabilities for the presence of different objects in the image. The last input argument, the confidence level, determines the threshold that a probability needs to surpass for a given object to be reported as detected in the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence level

Looks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange, since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like.

One possibility is that the model **did** detect the other fruits, but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.

As for the orange that was misclassified as an apple in this example, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production.

Deploying the model using fastAPI

Placing your object detection model in a server

Now that you know how the model works, it is time for you to deploy it! Aren't you excited? :)

Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server:
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications

Client-Server model

When talking about **deploying**, what is usually meant is putting all of the software required for predicting on a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.

The important thing you need to focus on is that the Machine Learning model lives on a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.

Let's get started by creating an instance of the `FastAPI` class:

```python
app = FastAPI()
```

The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place, to run the server you only need to use the command:

```python
uvicorn.run(app)
```

Your API is coded using fastAPI, but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab.

Endpoints

You can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models at the following endpoints:

- `myawesomemodel.com/count-cars/`
- `myawesomemodel.com/count-apples/`
- `myawesomemodel.com/count-plants/`

Each model would do what its name pattern suggests.

In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a decorator that specifies the HTTP method allowed (more on this next) and the URL pattern it will use.

The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":

```python
@app.get("/my-endpoint")
def handle_endpoint():
    ...
```

HTTP Requests

The client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication uses some verbs to denote common actions. Two very common verbs are:

- `GET` -> Retrieves information from the server.
- `POST` -> Provides information to the server, which it uses to respond.

If your client does a `GET request` to an endpoint of a server, you will get some information from this endpoint without the need to provide additional information.
In the case of a `POST request` you are explicitly telling the server that you will provide some information that must be processed in some way. Interactions with Machine Learning models living on endpoints are usually done via a `POST request`, since you need to provide the information that is required to compute a prediction.

Let's take a look at a POST request:

```python
@app.post("/my-other-endpoint")
def handle_other_endpoint(param1: int, param2: str):
    ...
```

For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string.

Why fastAPI?

With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?

Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float = 0.5, file: UploadFile = File(...)):
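    # Note: "model" and "confidence" arrive as query parameters, while the
    # image itself is sent in the request body as multipart form data.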
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!

This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
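Since this version of the endpoint also exposes the confidence level, a client can tune it per request. As a sketch (run it from a separate terminal or notebook while the server blocks this one; the image path and output filename are just examples):
```python
import requests

url = "http://localhost:8000/predict"

with open("images/fruits.jpg", "rb") as image_file:
    response = requests.post(
        url,
        # Both values travel as query parameters
        params={"model": "yolov3-tiny", "confidence": 0.2},
        files={"file": image_file},
    )

print(response.status_code)

with open("prediction_low_confidence.jpg", "wb") as f:
    f.write(response.content)
```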
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [20083]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model

Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!

This lab is all about deploying a real machine learning model and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and it yields accurate object detection results.

The sequence of steps/tasks to complete in this lab is as follows:

1. Inspect the image data set used for object detection
2. Take a look at the model itself
3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/)

Object Detection with YOLOV3

Inspecting the images

Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float = 0.5, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [60315]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning Model
Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!
This lab is all about deploying a real machine learning model and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.
The sequence of steps/tasks to complete in this lab is as follows:
1. Inspect the image data set used for object detection
2. Take a look at the model itself
3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/)
Object Detection with YOLOV3
Inspecting the images
Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the model
Now that you have a sense of the image data and the objects present, let's see if the model is able to detect and classify them correctly.
For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).
More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:
- `bbox`: list of lists containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ```
- `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```
- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ```
In the next section you will visually see these elements in action.
Creating the detect_and_draw_box function
Before using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function, which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected objects.
You might ask yourself why this function receives the model as an input argument, and what models there are to choose from. The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power: the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than with the full model, but it still works pretty well. Going forward, we recommend you stick to it, since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.
The model output is a vector of probabilities for the presence of different objects in the image. The last input argument, the confidence level, determines the threshold that a probability needs to surpass to report that a given object is detected in the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
2021-09-07 10:00:10.041620: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence level
Looks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange, since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like.
One possibility is that the model **did** detect the other fruits, but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.
As for this concrete example, in which an orange was misclassified as an apple, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production.
Deploying the model using fastAPI
Placing your object detection model in a server
Now that you know how the model works, it is time for you to deploy it! Aren't you excited? :)
Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications
Client-Server model
When talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.
The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.
Let's get started by creating an instance of the `FastAPI` class:
```python
app = FastAPI()
```
The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:
```python
uvicorn.run(app)
```
Your API is coded using fastAPI, but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab.
Endpoints
You can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:
- `myawesomemodel.com/count-cars/`
- `myawesomemodel.com/count-apples/`
- `myawesomemodel.com/count-plants/`
Each model would do what the name pattern suggests.
In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a decorator that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.
The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":
```python
@app.get("/my-endpoint")
def handle_endpoint():
    ...
```
HTTP Requests
The client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:
- `GET` -> Retrieves information from the server.
- `POST` -> Provides information to the server, which it uses to respond.
If your client does a `GET request` to an endpoint of a server, you will get some information from this endpoint without the need to provide additional information. 
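Concretely, from the client's side a GET request needs nothing but the URL. The following is a minimal sketch, not part of the original lab, assuming a server is already listening on `localhost:8000`:
```python
# Minimal GET sketch: the URL alone is enough, no payload required.
import requests

response = requests.get("http://localhost:8000/")
print(response.json())  # the greeting string returned by the home() handler
```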
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way. Interactions with Machine Learning models living on endpoints are usually done via a `POST request`, since you need to provide the information that is required to compute a prediction.

Let's take a look at a POST request:

```python
@app.post("/my-other-endpoint")
def handle_other_endpoint(param1: int, param2: str):
    ...
```

For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to the server. In this case we supplied two parameters: an integer and a string.

Why fastAPI?

With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint; for this case this means visiting http://localhost:8000/docs. Isn't that convenient?

Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
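# Note: inheriting from both str and Enum means each member compares equal to
# its plain string value, so it can be passed directly where a string model
# name is expected and FastAPI can validate and document the allowed options.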
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
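###Markdown
The endpoint above always relies on the detector's default confidence threshold of 0.5. Since we saw earlier how much this threshold matters, a natural extension is to let the client choose it. Below is a minimal sketch of such a variant: it adds a `confidence` query parameter (defaulting to 0.5) and forwards it to `detect_common_objects`, reusing the same validation and decoding steps. The `/predict_with_confidence` route name is just for illustration, and the sketch assumes the cell above (with its imports and `app` instance) has already run:
```python
@app.post("/predict_with_confidence")
def prediction_with_confidence(model: Model, confidence: float = 0.5, file: UploadFile = File(...)):
    # Validate the file extension, as in /predict
    filename = file.filename
    if filename.split(".")[-1] not in ("jpg", "jpeg", "png"):
        raise HTTPException(status_code=415, detail="Unsupported file provided.")

    # Decode the uploaded bytes into an OpenCV image
    image_stream = io.BytesIO(file.file.read())
    image_stream.seek(0)
    file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
    image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)

    # Forward the client-supplied threshold to the detector
    bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
    output_image = draw_bbox(image, bbox, label, conf)

    # Save and stream back the annotated image, as in /predict
    cv2.imwrite(f'images_uploaded/{filename}', output_image)
    return StreamingResponse(open(f'images_uploaded/{filename}', mode="rb"), media_type="image/jpeg")
```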
###Markdown
By running the following cell you will spin up the server! This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [107582]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
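###Markdown
With the server up, any HTTP client can exercise the `/predict` endpoint. The built-in client at http://localhost:8000/docs is the most convenient option, but as a rough sketch, here is how a request could look from Python using the `requests` library (assumed to be installed). This client code is not part of the server above; it would run in a separate process while the server is blocking this notebook:
```python
import requests

url = "http://localhost:8000/predict"

# "model" is a query parameter of the endpoint; "file" is the multipart upload
with open("images/fruits.jpg", "rb") as f:
    response = requests.post(
        url,
        params={"model": "yolov3-tiny"},
        files={"file": ("fruits.jpg", f, "image/jpeg")},
    )

# The endpoint streams back the annotated image as JPEG bytes;
# the output filename here is arbitrary
with open("fruits_prediction.jpg", "wb") as out:
    out.write(response.content)
```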
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model,confidence_score: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence_score)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [99611]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [11034]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [11404]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.58184814453125
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099048614502
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like.One possibility is that the model **did** detect the other fruits, but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.58184814453125
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099048614502
Detected object: apple with confidence level of 0.34759876132011414
Detected object: orange with confidence level of 0.32876095175743103
Detected object: apple with confidence level of 0.3124467730522156
Detected object: orange with confidence level of 0.27986064553260803
Detected object: orange with confidence level of 0.2749975919723511
Detected object: apple with confidence level of 0.2744506001472473
Detected object: orange with confidence level of 0.21419072151184082
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example, in which an orange was misclassified as an apple, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works, it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```python app = FastAPI() ```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```python uvicorn.run(app) ```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":```python @app.get("/my-endpoint") def handle_endpoint(): ... ```If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
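To make this concrete, here is a minimal, self-contained sketch tying `FastAPI()`, an endpoint decorator, and `uvicorn.run` together. The `/health` endpoint name is an illustrative assumption and not part of this lab, and the cell is meant to be saved and run as its own script so it does not clash with the lab server started further below.
###Code
# A hedged, minimal sketch combining the pieces described above.
# The "/health" endpoint name is an illustrative assumption, not lab code.
import uvicorn
from fastapi import FastAPI

app = FastAPI(title="Minimal example")

# Allow the GET method for the /health endpoint
@app.get("/health")
def health():
    # GET retrieves information without the client sending a body
    return {"status": "ok"}

# Uncomment when running this sketch as a standalone script
# uvicorn.run(app, host="127.0.0.1", port=8000)
###Output
_____no_output_____
###Markdown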
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```python @app.post("/my-other-endpoint") def handle_other_endpoint(param1: int, param2: str): ... ```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to the server. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your API using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [14243]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
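###Markdown
With the server up, a client in a separate process can exercise the `/predict` endpoint. The following is a hedged sketch using the third-party `requests` library (an assumption -- it is not used elsewhere in this lab); it cannot run in this notebook while `uvicorn.run` is blocking the kernel, so run it from a separate terminal or notebook.
###Code
# Hedged client sketch: run from a separate process while the server above
# is up. `requests` is an assumption here, not part of the lab code.
import requests

url = "http://127.0.0.1:8000/predict"
with open("images/fruits.jpg", "rb") as f:
    response = requests.post(
        url,
        params={"model": "yolov3-tiny"},                  # query parameter expected by the endpoint
        files={"file": ("fruits.jpg", f, "image/jpeg")},  # matches the UploadFile parameter name
    )

print(response.status_code)  # 200 on success, 415 for an unsupported file type

# The endpoint streams the annotated image back; persist the bytes to disk
with open("fruits_with_boxes.jpg", "wb") as out:
    out.write(response.content)
###Output
_____no_output_____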
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float = 0.5, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [29]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.5717206597328186
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [4032]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:62574 - "POST /predict?model=yolov3-tiny?confidence=0.2 HTTP/1.1" 422 Unprocessable Entity
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [4032]
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759876132011414
Detected object: orange with confidence level of 0.32876095175743103
Detected object: apple with confidence level of 0.31244680285453796
Detected object: orange with confidence level of 0.2798606753349304
Detected object: orange with confidence level of 0.2749978303909302
Detected object: apple with confidence level of 0.2744506895542145
Detected object: orange with confidence level of 0.21419063210487366
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
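As a side note, the `nest_asyncio.apply()` call below is only needed because Jupyter already runs its own asyncio event loop; in a stand-alone script you can call `uvicorn.run` directly. A minimal sketch of such a script is shown here (the `main.py` filename is illustrative, not part of this lab):
```python
# main.py -- illustrative stand-alone entry point
import uvicorn
from fastapi import FastAPI

app = FastAPI(title="Deploying a ML Model with FastAPI")

@app.get("/")
def home():
    return "Server is up."

if __name__ == "__main__":
    # Outside a notebook there is no competing event loop,
    # so no nest_asyncio workaround is required.
    uvicorn.run(app, host="127.0.0.1", port=8000)
```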
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [20192]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
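###Markdown
With the server running, you can test the `/predict` endpoint from a separate process (another notebook, a script, or the built-in client at http://localhost:8000/docs). A minimal client sketch using the `requests` library is shown below; the image path assumes the `images/` directory used earlier, and the output filename is illustrative:
```python
import requests

url = "http://localhost:8000/predict"

# The model name (and, optionally, a confidence threshold) travel as
# query parameters; the image itself goes as multipart form data.
with open("images/fruits.jpg", "rb") as image_file:
    response = requests.post(
        url,
        params={"model": "yolov3-tiny", "confidence": 0.5},
        files={"file": ("fruits.jpg", image_file, "image/jpeg")},
    )

print(response.status_code)

# The endpoint streams back the annotated image; save it to disk.
with open("prediction.jpg", "wb") as f:
    f.write(response.content)
```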
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759876132011414
Detected object: orange with confidence level of 0.32876095175743103
Detected object: apple with confidence level of 0.31244680285453796
Detected object: orange with confidence level of 0.2798606753349304
Detected object: orange with confidence level of 0.2749978303909302
Detected object: apple with confidence level of 0.2744506895542145
Detected object: orange with confidence level of 0.21419063210487366
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, float(confidence), model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [64389]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.5717206597328186
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [14300]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...), confidence=0.5):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image,confidence=float(confidence), model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [39488]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.571720540523529
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759876132011414
Detected object: orange with confidence level of 0.32876095175743103
Detected object: apple with confidence level of 0.31244680285453796
Detected object: orange with confidence level of 0.2798606753349304
Detected object: orange with confidence level of 0.2749978303909302
Detected object: apple with confidence level of 0.2744506895542145
Detected object: orange with confidence level of 0.21419063210487366
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information that must be processed in some way.

Interactions with Machine Learning models living on endpoints are usually done via a `POST request`, since you need to provide the information that is required to compute a prediction.

Let's take a look at a POST request:

```python
@app.post("/my-other-endpoint")
def handle_other_endpoint(param1: int, param2: str):
    ...
    ...
```

For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string.

Why fastAPI?

With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?

Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
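# Note: because Model subclasses str, each enum member behaves like a plain
# string, so it can be passed directly to cv.detect_common_objects below.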
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
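    # NOTE: checking the filename extension alone does not guarantee the upload
    # really is a valid image; a stricter server might also inspect the bytes
    # (e.g. verify that cv2.imdecode below returns a non-None array).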
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, confidence=confidence, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!

This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
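Once the server is running you can interact with it through the built-in client at http://localhost:8000/docs, or from a separate Python session. Below is a minimal client sketch (run it outside this notebook, since this notebook will be blocked by the server); it assumes the `requests` library is installed and that `images/fruits.jpg` exists:

```python
import requests

url = "http://localhost:8000/predict"

# Query parameters expected by the /predict endpoint
params = {"model": "yolov3-tiny", "confidence": 0.5}

# The image travels in the body of the request as multipart form data
with open("images/fruits.jpg", "rb") as image_file:
    files = {"file": ("fruits.jpg", image_file, "image/jpeg")}
    response = requests.post(url, params=params, files=files)

print(response.status_code)

# The server streams back the annotated image, so save the raw bytes
with open("prediction.jpg", "wb") as out_file:
    out_file.write(response.content)
```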
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [32345]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
_____no_output_____
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
_____no_output_____
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
_____no_output_____
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of how an apple looks.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
_____no_output_____
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example, in which an orange was misclassified as an apple, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
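As a preview of what talking to the deployed model will look like, here is a minimal client sketch using the `requests` library. It assumes the server defined below is already running on localhost port 8000 and that `images/fruits.jpg` exists; run it from a separate script or notebook, since this one will be blocked by the server:
```python
import requests

# The model is a query parameter; the image travels in the request body
with open("images/fruits.jpg", "rb") as image_file:
    response = requests.post(
        "http://localhost:8000/predict",
        params={"model": "yolov3-tiny"},
        files={"file": ("fruits.jpg", image_file, "image/jpeg")},
    )

# The endpoint streams back a JPEG with the bounding boxes drawn on it
print(response.status_code)
with open("fruits_with_boxes.jpg", "wb") as out_file:
    out_file.write(response.content)
```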
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
_____no_output_____
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab is as follows:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try to see whether the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of lists containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why this function receives the model as an input argument. What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than those of the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects in the image. The last input argument, the confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected in the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
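If you want to see the accuracy tradeoff for yourself, a quick sketch like the following compares both variants on the same image (note that the first call with the full `yolov3` will download its much larger pretrained weights):
```python
import cv2
import cvlib as cv

img = cv2.imread("images/fruits.jpg")

# Compare the lightweight and full variants on the same image
for model_name in ["yolov3-tiny", "yolov3"]:
    bbox, label, conf = cv.detect_common_objects(img, confidence=0.5, model=model_name)
    print(f"{model_name} found {len(label)} objects: {label}")
```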
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of how an apple looks.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example, in which an orange was misclassified as an apple, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
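If you'd rather exercise the endpoint without spinning up a separate process, fastAPI also ships with a test client that handles requests in-process. A sketch, assuming the `app` defined in the next cell and an image under `images/`:
```python
from fastapi.testclient import TestClient

# Wraps the app so requests are handled in-process; no running server needed
client = TestClient(app)

with open("images/fruits.jpg", "rb") as image_file:
    response = client.post(
        "/predict",
        params={"model": "yolov3-tiny"},
        files={"file": ("fruits.jpg", image_file, "image/jpeg")},
    )

print(response.status_code)  # 200 on success, 415 for unsupported file types
```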
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
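Once the server is up you can sanity-check it from another notebook or script (not this one, which will be blocked) with a plain GET request against the home endpoint, for example:
```python
import requests

# Should print 200 plus the welcome message returned by the home() handler
response = requests.get("http://localhost:8000/")
print(response.status_code, response.json())
```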
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [8920]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab is as follows:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try to see whether the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of lists containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why this function receives the model as an input argument. What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than those of the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects in the image. The last input argument, the confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected in the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
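Besides printing each detection the way the helper below does, you may want aggregate counts per class; a small sketch of that idea using the standard library's `Counter`:
```python
import cv2
import cvlib as cv
from collections import Counter

img = cv2.imread("images/fruits.jpg")
bbox, label, conf = cv.detect_common_objects(img, confidence=0.5, model="yolov3-tiny")

# Tally detections per class, e.g. Counter({'orange': 2, 'apple': 1})
print(Counter(label))
```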
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.5717206597328186
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of how an apple looks.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example, in which an orange was misclassified as an apple, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.programiz.com/python-programming/decorator) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
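Since `/predict` streams an image back, a client has to write the raw bytes to disk to inspect the result. A sketch with `requests` that consumes the body incrementally (run it from a separate process once the server below is up):
```python
import requests

with open("images/car.jpg", "rb") as image_file:
    response = requests.post(
        "http://localhost:8000/predict",
        params={"model": "yolov3-tiny"},
        files={"file": ("car.jpg", image_file, "image/jpeg")},
        stream=True,  # consume the response body incrementally
    )

# Persist the annotated image returned by the server, chunk by chunk
with open("car_with_boxes.jpg", "wb") as out_file:
    for chunk in response.iter_content(chunk_size=8192):
        out_file.write(chunk)
```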
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [35003]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab is as follows:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try to see whether the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of lists containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why this function receives the model as an input argument. What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than those of the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects in the image. The last input argument, the confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected in the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
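Rather than re-running detection every time you want a different threshold, you could also detect once at a permissive threshold and filter afterwards; a sketch of that idea:
```python
import cv2
import cvlib as cv

img = cv2.imread("images/fruits.jpg")

# Detect once with a low threshold, then filter in Python
bbox, label, conf = cv.detect_common_objects(img, confidence=0.2, model="yolov3-tiny")

threshold = 0.5
kept = [(b, l, c) for b, l, c in zip(bbox, label, conf) if c >= threshold]
print(f"{len(kept)} of {len(label)} detections survive a {threshold} threshold")
```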
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.571720540523529
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of how an apple looks.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test whether this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818483829498291
Detected object: orange with confidence level of 0.5346482992172241
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759876132011414
Detected object: orange with confidence level of 0.32876095175743103
Detected object: apple with confidence level of 0.31244680285453796
Detected object: orange with confidence level of 0.2798606753349304
Detected object: orange with confidence level of 0.2749978303909302
Detected object: apple with confidence level of 0.2744506895542145
Detected object: orange with confidence level of 0.21419063210487366
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example, in which an orange was misclassified as an apple, it serves as a reminder that these models are not perfect, and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
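Note that this version of the `/predict` endpoint also expects a `confidence` query parameter. A minimal client sketch (run from a separate process once the server is up):
```python
import requests

with open("images/fruits.jpg", "rb") as image_file:
    response = requests.post(
        "http://localhost:8000/predict",
        # Both the model and the confidence threshold travel as query parameters
        params={"model": "yolov3-tiny", "confidence": 0.2},
        files={"file": ("fruits.jpg", image_file, "image/jpeg")},
    )

print(response.status_code)
```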
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, confidence=confidence, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [36984]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab is as follows:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
def load_image(files):
for image_file in files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
load_image(image_files)
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
========================
Image processed: apple.jpg
Detected object: apple with confidence level of 0.5717206001281738
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346481800079346
Detected object: orange with confidence level of 0.5150994658470154
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346481800079346
Detected object: orange with confidence level of 0.5150994658470154
Detected object: apple with confidence level of 0.3475988805294037
Detected object: orange with confidence level of 0.3287607729434967
Detected object: apple with confidence level of 0.3124470114707947
Detected object: orange with confidence level of 0.2798607051372528
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.27445051074028015
Detected object: orange with confidence level of 0.21419112384319305
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, confidence=confidence, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
os.getenv("DOCKER-SETUP")
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [272]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.515099287033081
Detected object: apple with confidence level of 0.34759870171546936
Detected object: orange with confidence level of 0.32876086235046387
Detected object: apple with confidence level of 0.31244686245918274
Detected object: orange with confidence level of 0.27986079454421997
Detected object: orange with confidence level of 0.2749977707862854
Detected object: apple with confidence level of 0.2744504511356354
Detected object: orange with confidence level of 0.21419058740139008
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, confidence: float, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model, confidence=confidence)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
INFO: Started server process [45]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
###Markdown
Ungraded Lab Part 1 - Deploying a Machine Learning ModelWelcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away. This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.The sequence of steps/tasks to complete in this lab are as follow:1. Inspect the image data set used for object detection2. Take a look at the model itself3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/) Object Detection with YOLOV3 Inspecting the imagesLet's take a look at the images that will be passed to the YOLOV3 model. This will bring insight on what type of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
from IPython.display import Image, display
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
_____no_output_____
###Markdown
Overview of the modelNow that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly.For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:- `bbox`: list of list containing bounding box coordinates for detected objects. Example: ```python [[32, 76, 128, 192], [130, 83, 220, 185]] ``` - `label`: list of labels for detected objects. Example: ```python ['apple', 'apple'] ```- `conf`: list of confidence scores for detected objects. Example: ```python [0.6187325716018677, 0.42835739254951477] ``` In the next section you will visually see these elements in action. Creating the detect_and_draw_box functionBefore using the object detection model, create a directory where you can store the resulting images:
###Code
import os
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.You might ask yourself why does this function receive the model as an input argument? What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.The model output is a vector of probabilities for the presence of different objects on the image. The last input argument, confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected on the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f'images/{filename}'
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f'images_with_boxes/{filename}', output_image)
# Display the image with bounding boxes
display(Image(f'images_with_boxes/{filename}'))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
_____no_output_____
###Markdown
Changing the confidence levelLooks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
_____no_output_____
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation on how an apple looks like.One possibility is that the model **did** detect the other fruits but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
_____no_output_____
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.As for this concrete example when an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production. Deploying the model using fastAPI Placing your object detection model in a serverNow that you know how the model works it is time for you to deploy it! Aren't you excited? :)Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications Client-Server modelWhen talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook but there are a lot of resources on the internet that you can use to understand it better.The important thing you need to focus on, is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.Let's get started by creating an instance of the `FastAPI` class:```pythonapp = FastAPI()```The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:```pythonuvicorn.run(app)```Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab. EndpointsYou can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`. For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:- `myawesomemodel.com/count-cars/`- `myawesomemodel.com/count-apples/`- `myawesomemodel.com/count-plants/`Each model would do what the name pattern suggests.In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.The following example shows how to allow a HTTP GET request for the endpoint "/my-endpoint":```[email protected]("/my-endpoint")def handle_endpoint(): ... ...``` HTTP RequestsThe client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:- `GET` -> Retrieves information from the server.- `POST` -> Provides information to the server, which it uses to respond.If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. 
In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.Let's take a look at a POST request:```[email protected]("/my-other-endpoint")def handle_other_endpoint(param1: int, param2: str): ... ...```For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it. In this case we supplied two parameters: an integer and a string. Why fastAPI?With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, for this case this means to visit http://localhost:8000/docs. Isn't that convenient?Enough chatter, let's get going!
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI')
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f'images_uploaded/{filename}', output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f'images_uploaded/{filename}', mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.
###Code
# Allows the server to be run in this interactive environment
nest_asyncio.apply()
# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"
# Spin up the server!
uvicorn.run(app, host=host, port=8000)
###Output
_____no_output_____ |
docs/jupyter/t_pipelines/t_icp_registration.ipynb | ###Markdown
ICP registrationThis tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The inputs are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function `draw_registration_result` visualizes the alignment during the registration process. In this tutorial, we show different ICP variants and the API for using them. Helper visualization functionThe function below visualizes a target point cloud and a source point cloud transformed with an alignment transformation. The target point cloud and the source point cloud are painted with cyan and yellow colors respectively. The more and tighter the two point-clouds overlap with each other, the better the alignment result.
###Code
import time

import numpy as np
import open3d as o3d
import open3d.core as o3c

treg = o3d.t.pipelines.registration

def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
# This is a patched version for tutorial rendering.
# Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.4459,
front=[0.9288, -0.2951, -0.2242],
lookat=[1.6784, 2.0612, 1.4451],
up=[-0.3402, -0.9189, -0.1996])
###Output
_____no_output_____
###Markdown
Understanding ICP Algorithm In general, the ICP algorithm iterates over two steps:1. Find correspondence set $\mathcal{K}=\{(\mathbf{p}, \mathbf{q})\}$ from target point cloud $\mathbf{P}$, and source point cloud $\mathbf{Q}$ transformed with current transformation matrix $\mathbf{T}$.2. Update the transformation $\mathbf{T}$ by minimizing an objective function $E(\mathbf{T})$ defined over the correspondence set $\mathcal{K}$.Different variants of ICP use different objective functions $E(\mathbf{T})$ [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) [\[Park2017\]](../reference.htmlpark2017). Different variants of ICP Point-to-point ICPWe first show a point-to-point ICP algorithm [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) using the objective\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\|\mathbf{p} - \mathbf{T}\mathbf{q}\|^{2}\end{equation}The class `TransformationEstimationPointToPoint` provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective.--- Point-to-plane ICPThe point-to-plane ICP algorithm [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) uses a different objective function\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [\[Rusinkiewicz2001\]](../reference.htmlrusinkiewicz2001) has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.The class `TransformationEstimationPointToPlane` provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective. --- Colored ICPFollowing [\[Park2017\]](../reference.htmlpark2017), it runs ICP iterations with a joint optimization objective\begin{equation}E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T})\end{equation}where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.The geometric term $E_{G}$ is the same as the Point-to-plane ICP objective\begin{equation}E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.\begin{equation}E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2},\end{equation}where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$. Function$\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [\[Park2017\]](../reference.htmlpark2017).To further improve efficiency, [\[Park2017\]](../reference.htmlpark2017) proposes a multi-scale registration scheme. The class `TransformationEstimationForColoredICP` provides functions to compute the residuals and Jacobian matrices of the joint optimization objective. 
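To make these objectives concrete, here is a small self-contained numpy sketch (illustrative only, not part of the Open3D API; the function name is our own) that evaluates the point-to-point and point-to-plane objectives for a candidate transform $\mathbf{T}$ over a given correspondence set:
###Code
import numpy as np

def icp_objectives(T, src_pts, tgt_pts, tgt_normals):
    """Point-to-point and point-to-plane objectives for correspondences (p, q).

    T: (4, 4) candidate transformation.
    src_pts (q), tgt_pts (p), tgt_normals (n_p): (N, 3) arrays, where row i of
    each array forms one correspondence (p_i, q_i).
    """
    src_h = np.c_[src_pts, np.ones(len(src_pts))]  # homogeneous source points
    diff = tgt_pts - (src_h @ T.T)[:, :3]          # p - T q
    e_point_to_point = np.sum(diff ** 2)           # sum ||p - T q||^2
    e_point_to_plane = np.sum(np.einsum("ij,ij->i", diff, tgt_normals) ** 2)  # sum ((p - T q) . n_p)^2
    return e_point_to_point, e_point_to_plane
###Output
_____no_output_____
###Markdown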
--- Understanding ICP API **Note:** The `Tensor based ICP implementation` API is slightly different than the [Eigen based ICP implementation](../pipelines/icp_registration.rst), to support more functionalities. Input`PointClouds` between which the `Transformation` is to be estimated. [open3d.t.PointCloud]- Source Tensor PointCloud. [Float32 or Float64 dtypes are supported].- Target Tensor PointCloud. [Float32 or Float64 dtypes are supported]. **Note:** The initial alignment is usually obtained by a global registration algorithm.
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# For Colored-ICP the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
###Output
_____no_output_____
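###Markdown
The `colors` attribute was rescaled above because the Colored ICP variant described earlier requires it to have the same dtype as `positions` and `normals`. As a small sketch, the corresponding estimation method is constructed like the other estimation methods listed below:
###Code
# Sketch: estimation method for Colored ICP; it relies on the `colors` and
# `normals` attributes prepared in the previous cell.
colored_estimation = treg.TransformationEstimationForColoredICP()
###Output
_____no_output_____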
###Markdown
--- Parameters Max Correspondence Distances- This is the search radius around each point in the source point-cloud within which the neighbour search will try to find a corresponding point in the target point-cloud.- It is a `double` for `icp`, and `utility.DoubleVector` for `multi-scale-icp`.- One may typically keep this parameter between `1.0x - 3.0x` the `voxel-size` for each scale.- This parameter is the most important for performance tuning, as a higher radius takes more time (the neighbour search is performed over a larger radius).
###Code
# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution or the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may have a higher value of the `max_correspondence_distances` for the first coarse
# scale, as it is not very expensive and gives us more tolerance to the initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
###Output
_____no_output_____
###Markdown
Initial Transform from Source to Target [open3d.core.Tensor]- Initial estimate for transformation from source to target.- Transformation matrix Tensor of shape [4, 4] of type `Float64` on `CPU:0` device- The initial alignment is usually obtained by a global registration algorithm. See [Global registration](../pipelines/global_registration.rst) for examples.
###Code
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
###Output
_____no_output_____
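###Markdown
Before running ICP it can be useful to measure how good this initial alignment already is. A minimal sketch using `evaluate_registration` from the same `treg` module (it reports the same `fitness`/`inlier_rmse` metrics that `icp` returns):
###Code
# Evaluate the initial alignment; the numpy matrix is wrapped as a core Tensor.
init_eval = treg.evaluate_registration(
    source, target, max_correspondence_distance,
    o3c.Tensor(init_source_to_target))
print("Initial alignment fitness:", init_eval.fitness)
print("Initial alignment inlier RMSE:", init_eval.inlier_rmse)
###Output
_____no_output_____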
###Markdown
Estimation Method - This sets the ICP method to compute the transformation between two point-clouds given the correspondences.Options:- **o3d.t.pipelines.registration.TransformationEstimationPointToPoint()** - Point to Point ICP.- **o3d.t.pipelines.registration.TransformationEstimationPointToPlane(robust_kernel)** - Point to Plane ICP. - Requires `target point-cloud` to have `normals` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForColoredICP(robust_kernel, lambda)** - Colored ICP. - Requires `target` point-cloud to have `normals` attribute (of same dtype as `position` attribute). - Requires `source` and `target` point-clouds to have `colors` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForGeneralizedICP(robust_kernel, epsilon)** [To be added]. - Generalized ICP.
###Code
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
###Output
_____no_output_____
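###Markdown
The estimation method can also be constructed with a robust kernel for outlier rejection (the available kernels are listed below). A minimal sketch; the Tukey scaling parameter `0.1` is an assumed value for illustration and should be tuned to the noise level of the data:
###Code
# Sketch: point-to-plane estimation with a Tukey robust kernel.
# The scaling parameter (0.1) is an assumed value for illustration.
robust_estimation = treg.TransformationEstimationPointToPlane(
    treg.robust_kernel.RobustKernel(
        treg.robust_kernel.RobustKernelMethod.TukeyLoss, 0.1))
###Output
_____no_output_____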
###Markdown
Estimation Method also supports `Robust Kernels`: Robust kernels are used for outlier rejection. More on this in the `Robust Kernel` section.`robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)`Method options:- robust_kernel.RobustKernelMethod.L2Loss- robust_kernel.RobustKernelMethod.L1Loss- robust_kernel.RobustKernelMethod.HuberLoss- robust_kernel.RobustKernelMethod.CauchyLoss- robust_kernel.RobustKernelMethod.GMLoss- robust_kernel.RobustKernelMethod.TukeyLoss- robust_kernel.RobustKernelMethod.GeneralizedLoss ICP Convergence Criteria [relative rmse, relative fitness, max iterations]- This sets the condition for termination, i.e. when the iterations for a scale can be considered converged. - If the relative change (from the last iteration) in rmse and fitness is equal to or less than the specified value, the iterations for that scale will be considered as converged/completed.- For `Multi-Scale ICP` it is a `list` of `ICPConvergenceCriteria`, one for each scale of ICP, to provide more fine-grained control over performance.- One may keep the initial values of `relative_fitness` and `relative_rmse` high (loose tolerance), as we just want a rough estimate of the transformation, and lower them for later iterations to fine-tune.- Iterations on higher resolutions are more costly (take more time), so we want to do fewer iterations on higher resolutions.
###Code
# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for initial scales, i.e. we will be happy to consider ICP converged, when difference
# between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) in the initial coarse-scale iterations,
# and want our later scale convergence to be more accurate (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
###Output
_____no_output_____
###Markdown
Voxel Sizes- It is the voxel size (a lower voxel size corresponds to a higher resolution) for each scale of multi-scale ICP.- We want to perform the initial iterations on a coarse point-cloud (low resolution, i.e. large voxel size), as it is more time-efficient and avoids local minima, and then move to a dense point-cloud (high resolution, i.e. small voxel size). Therefore the voxel sizes must be in strictly decreasing order.
###Code
# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
###Output
_____no_output_____
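###Markdown
To get a feel for each scale, one can down-sample the source at the chosen voxel sizes and inspect the result (a sketch; printing a tensor point-cloud shows a summary of its attributes):
###Code
# Sketch: inspect how coarse the point-cloud becomes at each scale.
for v in [0.1, 0.05, 0.025]:
    down = source.voxel_down_sample(voxel_size=v)
    print("voxel_size =", v, "->", down)
###Output
_____no_output_____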
###Markdown
Save Loss Log When `True`, it saves the iteration-wise values of `fitness`, `inlier_rmse`, `transformation`, `scale`, and `iteration` in `loss_log_` in the `registration_result`. Default: `False`.
###Code
save_loss_log = True
###Output
_____no_output_____
###Markdown
--- Vanilla ICP Example 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from ICP
###Code
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
--- Now let's try with a poor initialisation
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
As we can see, a poor initial alignment may cause ICP convergence to fail. Using a larger `max_correspondence_distance` can resolve this issue, but it will take longer to process.
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before using
# ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
We may resolve the above issues and get even better accuracy by using `Multi-Scale ICP` --- Multi-Scale ICP Example Problems with using Vanilla-ICP (previous version):- Running the ICP algorithm on dense point-clouds is very slow. - It requires good initial alignment: - If the point-cloud is not well aligned, the convergence might get stuck in local-minima in initial iterations. - We need to have a larger `max_correspondence_distance` if the aligned point cloud does not have sufficient overlaps. - If the point-cloud is heavily down-sampled (coarse), the obtained result will not be accurate. These drawbacks can be solved using Multi-Scale ICP. In Multi-Scale ICP, we perform the initial iterations on a coarse point-cloud to get a better estimate of the initial alignment, and use this alignment for convergence on a denser point-cloud. ICP on a coarse point-cloud is inexpensive and allows us to use a larger `max_correspondence_distance`. It is also less likely for the convergence to get stuck in local minima. As we get a good estimate, it takes fewer iterations on the dense point-cloud to converge to a more accurate transform. It is recommended to use `Multi-Scale ICP` over `ICP`, for efficient convergence, especially for large point clouds. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from Multi-Scale ICP
###Code
# Setting Verbosity to Debug, helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
Plotting Convergence Graph We can use `registration_result.loss_log` to plot convergence and fine-tune our application.
###Code
from matplotlib import pyplot as plt
def plot_rmse(registration_result):
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Inlier RMSE vs Iteration")
axes.plot(registration_result.loss_log["index"].numpy(),
registration_result.loss_log["inlier_rmse"].numpy())
def plot_scale_wise_rmse(registration_result):
scales = registration_result.loss_log["scale"].numpy()
iterations = registration_result.loss_log["iteration"].numpy()
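    # `scale` is logged as an (N, 1) tensor; its last entry is the final scale index.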
num_scales = scales[-1][0] + 1
fig, axes = plt.subplots(nrows=1, ncols=num_scales, figsize=(20, 5))
masks = {}
for scale in range(0, num_scales):
masks[scale] = registration_result.loss_log["scale"] == scale
rmse = registration_result.loss_log["inlier_rmse"][masks[scale]].numpy()
iteration = registration_result.loss_log["iteration"][
masks[scale]].numpy()
title_prefix = "Scale Index: " + str(scale)
axes[scale].set_title(title_prefix + " Inlier RMSE vs Iteration")
axes[scale].plot(iteration, rmse)
print("Vanilla ICP")
plot_rmse(registration_icp)
print("Multi Scale ICP")
plot_rmse(registration_ms_icp)
plot_scale_wise_rmse(registration_ms_icp)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Vanilla ICP and Multi-Scale ICP `Inlier RMSE` vs `Iteration`")
if len(registration_ms_icp.loss_log["index"]) > len(
registration_icp.loss_log["inlier_rmse"]):
axes.plot(registration_ms_icp.loss_log["index"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy())
else:
axes.plot(registration_icp.loss_log["index"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy())
###Output
_____no_output_____
###Markdown
--- Multi-Scale ICP on CUDA device Example
###Code
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
###Output
_____no_output_____
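###Markdown
If the build may lack CUDA support, a defensive sketch (assuming `o3d.core.cuda.is_available()` reports whether the installed build has CUDA) falls back to the CPU:
###Code
# Sketch: pick the CUDA device when available, otherwise stay on CPU.
device = (o3d.core.Device("CUDA:0")
          if o3d.core.cuda.is_available() else o3d.core.Device("CPU:0"))
source_dev = source.to(device)
target_dev = target.to(device)
###Output
_____no_output_____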
###Markdown
--- Case of `no correspondences`. In case of `no correspondences`, the `fitness` and `inlier_rmse` are `0`.
###Code
max_correspondence_distance = 0.02
init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
[0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)
if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
print("ICP Convergence Failed, as no correspondence were found")
###Output
_____no_output_____
###Markdown
--- Information Matrix `Information Matrix` gives us further information about how well the point-clouds are aligned.
###Code
information_matrix = treg.get_information_matrix(
source, target, max_correspondence_distances[2],
registration_ms_icp.transformation)
print(information_matrix)
###Output
_____no_output_____
###Markdown
--- Now that we have a basic understanding of the ICP algorithm and the API, let's experiment with the different versions to understand the differences. Initial Alignment
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
max_correspondence_distance, trans_init)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
###Output
_____no_output_____
###Markdown
--- Point-To-Point ICP Registration We first show a point-to-point ICP algorithm [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) using the objective\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\|\mathbf{p} - \mathbf{T}\mathbf{q}\|^{2}\end{equation}The class `TransformationEstimationPointToPoint` provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
# Initial alignment or source to target transform.
init_source_to_target = trans_init
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The fitness score increases from `0.174722` to `0.372474`. The inlier_rmse reduces from `0.011771` to `0.007761`. By default, `icp` runs until convergence or until it reaches a maximum number of iterations (30 by default). This can be changed to allow more computation time and to improve the results further.
###Code
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The final alignment is tight. The fitness score improves to `0.620972`. The inlier_rmse reduces to `0.006581`.--- Point-to-Plane ICP RegistrationThe point-to-plane ICP algorithm [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) uses a different objective function\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [\[Rusinkiewicz2001\]](../reference.htmlrusinkiewicz2001) has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.The class `TransformationEstimationPointToPlane` provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective.
###Code
estimation = treg.TransformationEstimationPointToPlane()
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
The point-to-plane ICP reaches tight alignment within 30 iterations (a `fitness` score of 0.620972 and an `inlier_rmse` score of 0.006581). --- Colored ICP RegistrationThis tutorial demonstrates an ICP variant that uses both geometry and color for registration. It implements the algorithm of [\[Park2017\]](../reference.htmlpark2017). The color information locks the alignment along the tangent plane. Thus this algorithm is more accurate and more robust than prior point cloud registration algorithms, while the running speed is comparable to that of ICP registration.
###Code
# Overriding visualization function, according to best camera view for colored-icp sample data.
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.5,
front=[-0.2458, -0.8088, 0.5342],
lookat=[1.7745, 2.2305, 0.9787],
up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
demo_cicp_pcds = o3d.data.DemoColoredICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[1])
# For Colored-ICP `colors` attribute must be of the same dtype as `positions` and `normals` attribute.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)
###Output
_____no_output_____
###Markdown
Setting baseline with point-to-plane registration We first run Point-to-plane ICP as a baseline approach. The visualization below shows misaligned green triangle textures. This is because a geometric constraint does not prevent two planar surfaces from slipping.
###Code
estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
Colored RegistrationThe core function for colored point cloud registration is `registration_colored_icp`. Following [\[Park2017\]](../reference.htmlpark2017), it runs ICP iterations (see [Point-to-point ICP](../pipelines/icp_registration.ipynbPoint-to-point-ICP) for details) with a joint optimization objective\begin{equation}E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T})\end{equation}where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.The geometric term $E_{G}$ is the same as the [Point-to-plane ICP](../pipelines/icp_registration.ipynbPoint-to-plane-ICP) objective\begin{equation}E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.\begin{equation}E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2},\end{equation}where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$. Function$\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [\[Park2017\]](../reference.htmlpark2017).To further improve efficiency, [\[Park2017\]](../reference.htmlpark2017) proposes a multi-scale registration scheme.
###Code
estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=50),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]
max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])
voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# colored pointcloud registration
# This is implementation of following paper
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017
print("Colored point cloud registration")
s = time.time()
reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_multiscale_icp.transformation)
###Output
_____no_output_____
###Markdown
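The trade-off weight $\delta$ corresponds to the estimator's geometric weighting; as a sketch (assuming the constructor exposes it as `lambda_geometric`, with `0.968` the customary default):
###Code
# Sketch: set the geometric/photometric trade-off explicitly.
# `lambda_geometric` is assumed to correspond to the delta in the
# objective above; 0.968 weights the geometric term heavily.
estimation = treg.TransformationEstimationForColoredICP(lambda_geometric=0.968)
###Output
_____no_output_____
###Markdown
The multi-scale colored registration below uses the default weighting.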
ICP registration This tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The inputs are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function `draw_registration_result` visualizes the alignment during the registration process. In this tutorial, we show different ICP variants, and the API for using them. Helper visualization function The function below visualizes a target point cloud and a source point cloud transformed with an alignment transformation. The target point cloud and the source point cloud are painted with cyan and yellow colors respectively. The more and tighter the two point-clouds overlap with each other, the better the alignment result.
###Code
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.4459,
front=[0.9288, -0.2951, -0.2242],
lookat=[1.6784, 2.0612, 1.4451],
up=[-0.3402, -0.9189, -0.1996])
###Output
_____no_output_____
###Markdown
Understanding ICP Algorithm In general, the ICP algorithm iterates over two steps:1. Find correspondence set $\mathcal{K}=\{(\mathbf{p}, \mathbf{q})\}$ from target point cloud $\mathbf{P}$, and source point cloud $\mathbf{Q}$ transformed with current transformation matrix $\mathbf{T}$.2. Update the transformation $\mathbf{T}$ by minimizing an objective function $E(\mathbf{T})$ defined over the correspondence set $\mathcal{K}$.Different variants of ICP use different objective functions $E(\mathbf{T})$ [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) [\[Park2017\]](../reference.htmlpark2017). Understanding ICP API **Note:** The `Tensor based ICP implementation` API is slightly different than the [Eigen based ICP implementation](../pipelines/icp_registration.rst), to support more functionalities. Input`PointClouds` between which the `Transformation` is to be estimated. [open3d.t.PointCloud]- Source Tensor PointCloud. [Float32 or Float64 dtypes are supported].- Target Tensor PointCloud. [Float32 or Float64 dtypes are supported]. **Note:** The initial alignment is usually obtained by a global registration algorithm.
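As a self-contained toy illustration of these two steps (a sketch only, not the library implementation: brute-force nearest-neighbour correspondences and a closed-form point-to-point update on 2D points):
###Code
import numpy as np

# Toy 2D illustration of the two-step loop above: target P, and source
# Q = P rotated about its centroid and shifted. Row-vector convention:
# a transformed source is Q @ R.T + t.
rng = np.random.default_rng(0)
P = rng.random((50, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
Q = (P - P.mean(0)) @ R_true.T + P.mean(0) + 0.05

R, t = np.eye(2), np.zeros(2)
for _ in range(20):
    Qt = Q @ R.T + t
    # Step 1: correspondence set K = nearest target point per source point.
    K = ((Qt[:, None, :] - P[None, :, :])**2).sum(-1).argmin(1)
    # Step 2: closed-form point-to-point update (Kabsch/SVD;
    # reflection handling omitted for brevity).
    mu_q, mu_p = Qt.mean(0), P[K].mean(0)
    U, _, Vt = np.linalg.svd((Qt - mu_q).T @ (P[K] - mu_p))
    R_step = (U @ Vt).T
    R, t = R_step @ R, R_step @ t + (mu_p - R_step @ mu_q)

print("max residual after toy ICP:", np.abs(Q @ R.T + t - P).max())
###Output
_____no_output_____
###Markdown
In Open3D both steps are handled internally: the estimation method fixes the objective $E(\mathbf{T})$, and the convergence criteria fix the stopping rule.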
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# For Colored-ICP `colors` attribute must be of the same dtype as `positions` and `normals` attribute.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point-cloud.
# ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
###Output
_____no_output_____
###Markdown
--- Parameters Max Correspondence Distances- This is the search radius around each point in the source point-cloud within which the neighbour search will try to find a corresponding point in the target point-cloud.- It is a `double` for `icp`, and a `utility.DoubleVector` for `multi-scale-icp`.- One may typically keep this parameter between `1.0x - 3.0x` the `voxel-size` for each scale.- This parameter is the most important for performance tuning, as a higher radius takes more time (the neighbour search is performed over a larger radius).
###Code
# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution or the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may have a higher value of the `max_correspondence_distances` for the first coarse
# scale, as it is not much expensive, and gives us more tolerance to initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
###Output
_____no_output_____
###Markdown
Initial Transform from Source to Target [open3d.core.Tensor]- Initial estimate for transformation from source to target.- Transformation matrix Tensor of shape [4, 4] of type `Float64` on `CPU:0` device- The initial alignment is usually obtained by a global registration algorithm. See [Global registration](../pipelines/global_registration.rst) for examples.
###Code
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
###Output
_____no_output_____
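###Markdown
The cell above passes a NumPy array; per the note above, the transform is a `[4, 4]` `Float64` Tensor on `CPU:0`, which can also be constructed explicitly (a sketch):
###Code
# Sketch: build the initial transform explicitly as an Open3D tensor.
init_source_to_target = o3d.core.Tensor(init_source_to_target,
                                        o3d.core.Dtype.Float64)
###Output
_____no_output_____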
###Markdown
Estimation Method - This sets the ICP method to compute the transformation between two point-clouds given the correspondences.Options:- **o3d.t.pipelines.registration.TransformationEstimationPointToPoint()** - Point to Point ICP.- **o3d.t.pipelines.registration.TransformationEstimationPointToPlane(robust_kernel)** - Point to Plane ICP. - Requires `target point-cloud` to have `normals` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForColoredICP(robust_kernel, lambda)** - Colored ICP. - Requires `target` point-cloud to have `normals` attribute (of same dtype as `position` attribute). - Requires `source` and `target` point-clouds to have `colors` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForGeneralizedICP(robust_kernel, epsilon)** [To be added]. - Generalized ICP.
###Code
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
###Output
_____no_output_____
###Markdown
Estimation Method also supports `Robust Kernels`: Robust kernels are used for outlier rejection. More on this in the `Robust Kernel` section.`robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)`Method options:- robust_kernel.RobustKernelMethod.L2Loss- robust_kernel.RobustKernelMethod.L1Loss- robust_kernel.RobustKernelMethod.HuberLoss- robust_kernel.RobustKernelMethod.CauchyLoss- robust_kernel.RobustKernelMethod.GMLoss- robust_kernel.RobustKernelMethod.TukeyLoss- robust_kernel.RobustKernelMethod.GeneralizedLoss ICP Convergence Criteria [relative rmse, relative fitness, max iterations]- This sets the condition for termination, i.e. when the iterations for a scale can be considered converged.- If the relative (change in value from the last iteration) rmse and fitness are equal to or less than the specified values, the iterations for that scale are considered converged/completed.- For `Multi-Scale ICP` it is a `list` of `ICPConvergenceCriteria`, one for each scale of ICP, to provide finer control over performance.- One may keep `relative_fitness` and `relative_rmse` high (loose tolerance) for the initial coarse scales, as we only want a rough estimate of the transformation there, and low (strict tolerance) for the later scales to fine-tune.- Iterations on higher resolutions are more costly (take more time), so we want to do fewer iterations on higher resolution.
###Code
# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for initial scales, i.e. we will be happy to consider ICP converged, when difference
# between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) initial coarse-scale iteration,
# and want our later scale convergence to be more accurate (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
###Output
_____no_output_____
###Markdown
Voxel Sizes- It is the voxel size (a lower voxel size corresponds to a higher resolution), for each scale of multi-scale ICP.- We want to perform the initial iterations on a coarse point-cloud (low resolution, i.e. large voxel size), as it is more time-efficient and avoids local minima, and then move to a dense point-cloud (high resolution, i.e. small voxel size). Therefore the voxel sizes must be in strictly decreasing order.
###Code
# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
###Output
_____no_output_____
###Markdown
Get Iteration-wise registration result using callback lambda function An optional lambda function, which receives a string-to-tensor dictionary of attributes such as "iteration_index", "scale_index", "scale_iteration_index", "inlier_rmse", "fitness", "transformation", on the CPU device, updated after each iteration.
###Code
# Example callback_after_iteration lambda function:
callback_after_iteration = lambda updated_result_dict : print("Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
updated_result_dict["iteration_index"].item(),
updated_result_dict["fitness"].item(),
updated_result_dict["inlier_rmse"].item()))
###Output
_____no_output_____
###Markdown
--- Vanilla ICP Example 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Iteration-wise debugging is available via `callback_after_iteration` (defined above).
###Output
_____no_output_____
###Markdown
2. Get Registration Result from ICP
###Code
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, callback_after_iteration)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
--- Now let's try with a poor initialisation
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
As we can see, a poor initial alignment may cause ICP convergence to fail. Using a larger `max_correspondence_distance` can resolve this issue, but it will take longer to process.
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before using
# ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
We may resolve the above issues and get even better accuracy by using `Multi-Scale ICP` --- Multi-Scale ICP Example Problems with using Vanilla-ICP (previous version):- Running the ICP algorithm on dense point-clouds is very slow. - It requires good initial alignment: - If the point-cloud is not well aligned, the convergence might get stuck in local-minima in initial iterations. - We need to have a larger `max_correspondence_distance` if the aligned point cloud does not have sufficient overlaps. - If the point-cloud is heavily down-sampled (coarse), the obtained result will not be accurate. These drawbacks can be solved using Multi-Scale ICP. In Multi-Scale ICP, we perform the initial iterations on a coarse point-cloud to get a better estimate of the initial alignment, and use this alignment for convergence on a denser point-cloud. ICP on a coarse point-cloud is inexpensive and allows us to use a larger `max_correspondence_distance`. It is also less likely for the convergence to get stuck in local minima. As we get a good estimate, it takes fewer iterations on the dense point-cloud to converge to a more accurate transform. It is recommended to use `Multi-Scale ICP` over `ICP`, for efficient convergence, especially for large point clouds. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Callback to print iteration-wise `fitness`, `inlier_rmse`, etc. to analyse and tune the result.
callback_after_iteration = lambda loss_log_map : print("Iteration Index: {}, Scale Index: {}, Scale Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
loss_log_map["iteration_index"].item(),
loss_log_map["scale_index"].item(),
loss_log_map["scale_iteration_index"].item(),
loss_log_map["fitness"].item(),
loss_log_map["inlier_rmse"].item()))
###Output
_____no_output_____
###Markdown
2. Get Registration Result from Multi-Scale ICP
###Code
# Setting Verbosity to Debug, helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
callback_after_iteration)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
--- Multi-Scale ICP on CUDA device Example
###Code
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
--- Case of `no correspondences`. In case of `no correspondences`, the `fitness` and `inlier_rmse` are `0`.
###Code
max_correspondence_distance = 0.02
init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
[0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)
if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
print("ICP Convergence Failed, as no correspondence were found")
###Output
_____no_output_____
###Markdown
--- Information Matrix `Information Matrix` gives us further information about how well the point-clouds are aligned. It is a 6 x 6 matrix; in multiway registration, for example, it is used to weight the edges of the pose graph.
###Code
information_matrix = treg.get_information_matrix(
source, target, max_correspondence_distances[2],
registration_ms_icp.transformation)
print(information_matrix)
###Output
_____no_output_____
###Markdown
--- Now that we have a basic understanding of the ICP algorithm and the API, let's experiment with the different versions to understand the differences. Initial Alignment `evaluate_registration` reports two metrics: `fitness`, the ratio of inlier correspondences to the number of target points (higher is better), and `inlier_rmse`, the RMSE over the inlier correspondences (lower is better).
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Initial guess transform between the two point-cloud.
# ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
max_correspondence_distance, trans_init)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
###Output
_____no_output_____
###Markdown
--- Point-To-Point ICP Registration We first show a point-to-point ICP algorithm [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) using the objective\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\|\mathbf{p} - \mathbf{T}\mathbf{q}\|^{2}\end{equation}The class `TransformationEstimationPointToPoint` provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
# Initial alignment or source to target transform.
init_source_to_target = trans_init
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The fitness score increases from `0.174722` to `0.372474`. The inlier_rmse reduces from `0.011771` to `0.007761`. By default, `icp` runs until convergence or until it reaches a maximum number of iterations (30 by default). This can be changed to allow more computation time and to improve the results further.
###Code
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The final alignment is tight. The fitness score improves to `0.620972`. The inlier_rmse reduces to `0.006581`.--- Point-to-Plane ICP RegistrationThe point-to-plane ICP algorithm [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) uses a different objective function\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [\[Rusinkiewicz2001\]](../reference.htmlrusinkiewicz2001) has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.The class `TransformationEstimationPointToPlane` provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective.
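As a sketch of those quantities (a standard small-angle linearization about the current estimate, with pose increment $(\boldsymbol{\omega}, \mathbf{t})$; sign conventions vary between implementations), the per-correspondence residual and Jacobian take the form\begin{equation}r_i = \big(\mathbf{p}_i - \mathbf{T}\mathbf{q}_i\big)\cdot\mathbf{n}_{\mathbf{p}_i}, \qquad \mathbf{J}_i = \frac{\partial r_i}{\partial(\boldsymbol{\omega}, \mathbf{t})} = -\big[\,(\mathbf{T}\mathbf{q}_i \times \mathbf{n}_{\mathbf{p}_i})^{\top} \;\; \mathbf{n}_{\mathbf{p}_i}^{\top}\,\big].\end{equation}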
###Code
estimation = treg.TransformationEstimationPointToPlane()
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
The point-to-plane ICP reaches tight alignment within 30 iterations (a `fitness` score of 0.620972 and an `inlier_rmse` score of 0.006581). --- Colored ICP RegistrationThis tutorial demonstrates an ICP variant that uses both geometry and color for registration. It implements the algorithm of [\[Park2017\]](../reference.htmlpark2017). The color information locks the alignment along the tangent plane. Thus this algorithm is more accurate and more robust than prior point cloud registration algorithms, while the running speed is comparable to that of ICP registration.
###Code
# Overriding visualization function, according to best camera view for colored-icp sample data.
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.5,
front=[-0.2458, -0.8088, 0.5342],
lookat=[1.7745, 2.2305, 0.9787],
up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
demo_cicp_pcds = o3d.data.DemoColoredICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[1])
# For Colored-ICP `colors` attribute must be of the same dtype as `positions` and `normals` attribute.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)
###Output
_____no_output_____
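###Markdown
A quick sanity check that the dtype requirement from the comment above holds (a minimal sketch; `positions` is the attribute name the comment itself refers to):
###Code
# Colored ICP requires `colors` to match the dtype of `positions`/`normals`.
print(source.point["colors"].dtype, source.point["positions"].dtype)
###Output
_____no_output_____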
###Markdown
Setting baseline with point-to-plane registrationWe first run Point-to-plane ICP as a baseline approach. The visualization below shows misaligned green triangle textures. This is because a geometric constraint does not prevent two planar surfaces from slipping.
###Code
estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
Colored RegistrationThe core function for colored point cloud registration is `registration_colored_icp`. Following [\[Park2017\]](../reference.htmlpark2017), it runs ICP iterations (see [Point-to-point ICP](../pipelines/icp_registration.ipynbPoint-to-point-ICP) for details) with a joint optimization objective\begin{equation}E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T})\end{equation}where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.The geometric term $E_{G}$ is the same as the [Point-to-plane ICP](../pipelines/icp_registration.ipynbPoint-to-plane-ICP) objective\begin{equation}E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.\begin{equation}E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2},\end{equation}where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$. Function$\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [\[Park2017\]](../reference.htmlpark2017).To further improve efficiency, [\[Park2017\]](../reference.htmlpark2017) proposes a multi-scale registration scheme.
###Code
estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=50),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]
max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])
voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# colored pointcloud registration
# This is implementation of following paper
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017
print("Colored point cloud registration")
s = time.time()
reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_multiscale_icp.transformation)
###Output
_____no_output_____
###Markdown
ICP registrationThis tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The inputs are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function `draw_registration_result` visualizes the alignment during the registration process. In this tutorial, we show different ICP variants and the API for using them. Helper visualization functionThe function below visualizes a target point cloud and a source point cloud transformed with an alignment transformation. The target point cloud and the source point cloud are painted with cyan and yellow colors respectively. The more tightly the two point clouds overlap with each other, the better the alignment result.
###Code
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.4459,
front=[0.9288, -0.2951, -0.2242],
lookat=[1.6784, 2.0612, 1.4451],
up=[-0.3402, -0.9189, -0.1996])
###Output
_____no_output_____
###Markdown
Understanding ICP Algorithm In general, the ICP algorithm iterates over two steps:1. Find correspondence set $\mathcal{K}=\{(\mathbf{p}, \mathbf{q})\}$ from target point cloud $\mathbf{P}$, and source point cloud $\mathbf{Q}$ transformed with current transformation matrix $\mathbf{T}$.2. Update the transformation $\mathbf{T}$ by minimizing an objective function $E(\mathbf{T})$ defined over the correspondence set $\mathcal{K}$.Different variants of ICP use different objective functions $E(\mathbf{T})$ [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) [\[Park2017\]](../reference.htmlpark2017). Understanding ICP API **Note:** The `Tensor based ICP implementation` API is slightly different than the [Eigen based ICP implementation](../pipelines/icp_registration.rst), to support more functionalities. Input`PointClouds` between which the `Transformation` is to be estimated. [open3d.t.PointCloud]- Source Tensor PointCloud. [Float32 or Float64 dtypes are supported].- Target Tensor PointCloud. [Float32 or Float64 dtypes are supported]. **Note:** The initial alignment is usually obtained by a global registration algorithm.
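Before loading the data below, here is a minimal NumPy sketch of the two-step loop described above (a point-to-point variant with brute-force nearest neighbours and a closed-form Kabsch/SVD update). It is illustrative only; use `treg.icp` for real workloads, and note that all names in it are invented for this sketch.
###Code
import numpy as np

def icp_point_to_point_sketch(P, Q, T_init, iterations=20):
    """P: (N, 3) target points, Q: (M, 3) source points, T_init: (4, 4)."""
    T = T_init.copy()
    for _ in range(iterations):
        # Step 1: transform the source and find nearest-neighbour
        # correspondences (brute force, O(N*M) memory, for clarity only).
        Qt = Q @ T[:3, :3].T + T[:3, 3]
        nn = np.linalg.norm(P[None, :, :] - Qt[:, None, :], axis=2).argmin(axis=1)
        Pc = P[nn]
        # Step 2: closed-form update minimising sum ||p - (R q + t)||^2 (Kabsch).
        p_mean, q_mean = Pc.mean(axis=0), Qt.mean(axis=0)
        H = (Qt - q_mean).T @ (Pc - p_mean)
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:  # guard against reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, p_mean - R @ q_mean
        T = step @ T  # compose the incremental update with the running estimate
    return T

# Tiny smoke test: a known translation should be roughly recovered.
rng = np.random.default_rng(0)
P_demo = rng.random((50, 3))
print(icp_point_to_point_sketch(P_demo, P_demo - [0.1, 0.0, 0.0], np.eye(4))[:3, 3])
###Output
_____no_output_____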
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# For Colored-ICP `colors` attribute must be of the same dtype as `positions` and `normals` attribute.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
###Output
_____no_output_____
###Markdown
--- Parameters Max correspondence Distances- This is the search radius around each point in the source point-cloud within which the neighbour search will try to find a corresponding point in the target point-cloud.- It is a `double` for `icp`, and `utility.DoubleVector` for `multi-scale-icp`.- One may typically keep this parameter between `1.0x - 3.0x` `voxel-size` for each scale.- This parameter is most important for performance tuning, as a larger radius takes more time (the neighbour search is performed over a larger radius).
###Code
# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution or the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may have a higher value of the `max_correspondence_distances` for the first coarse
# scale, as it is not very expensive, and gives us more tolerance to initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
###Output
_____no_output_____
###Markdown
Initial Transform from Source to Target [open3d.core.Tensor]- Initial estimate for transformation from source to target.- Transformation matrix Tensor of shape [4, 4] of type `Float64` on `CPU:0` device- The initial alignment is usually obtained by a global registration algorithm. See [Global registration](../pipelines/global_registration.rst) for examples.
###Code
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
###Output
_____no_output_____
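###Markdown
If an explicit tensor is preferred over the NumPy array (the note above asks for a `Float64` transformation Tensor on the `CPU:0` device), it can be constructed directly; a minimal sketch:
###Code
# Hedged sketch: explicit conversion of the NumPy initial guess to a core Tensor.
init_source_to_target_t = o3d.core.Tensor(init_source_to_target,
                                          o3d.core.Dtype.Float64)
print(init_source_to_target_t.dtype, init_source_to_target_t.device)
###Output
_____no_output_____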
###Markdown
Estimation Method - This sets the ICP method to compute the transformation between two point-clouds given the correspondences.Options:- **o3d.t.pipelines.registration.TransformationEstimationPointToPoint()** - Point to Point ICP.- **o3d.t.pipelines.registration.TransformationEstimationPointToPlane(robust_kernel)** - Point to Plane ICP. - Requires `target point-cloud` to have `normals` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForColoredICP(robust_kernel, lambda)** - Colored ICP. - Requires `target` point-cloud to have `normals` attribute (of same dtype as `position` attribute). - Requires `source` and `target` point-clouds to have `colors` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForGeneralizedICP(robust_kernel, epsilon)** [To be added]. - Generalized ICP.
###Code
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
###Output
_____no_output_____
###Markdown
Estimation Method also supports `Robust Kernels`: Robust kernels are used for outlier rejection. More on this in the `Robust Kernel` section; a construction sketch follows the criteria cell below.`robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)`Method options:- robust_kernel.RobustKernelMethod.L2Loss- robust_kernel.RobustKernelMethod.L1Loss- robust_kernel.RobustKernelMethod.HuberLoss- robust_kernel.RobustKernelMethod.CauchyLoss- robust_kernel.RobustKernelMethod.GMLoss- robust_kernel.RobustKernelMethod.TukeyLoss- robust_kernel.RobustKernelMethod.GeneralizedLoss ICP Convergence Criteria [relative rmse, relative fitness, max iterations]- This sets the termination condition, i.e. when the iterations for a scale can be considered converged. - If the relative change (from the last iteration) in rmse and fitness is equal to or less than the specified value, the iterations for that scale are considered converged/completed.- For `Multi-Scale ICP` it is a `list` of `ICPConvergenceCriteria`, one for each scale of ICP, to provide finer control over performance.- One may keep the initial values of `relative_fitness` and `relative_rmse` high (loose tolerance), as we only want a rough estimate of the transformation, and make them lower (tighter) for later iterations to fine-tune.- Iterations on higher resolutions are more costly (take more time), so we want to do fewer iterations on higher resolutions.
###Code
# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for initial scales, i.e. we will be happy to consider ICP converged, when difference
# between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) in the initial coarse-scale iterations,
# and want our later scale convergence to be more accurate (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
###Output
_____no_output_____
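###Markdown
Returning to the `Robust Kernel` options listed above: a minimal construction sketch, using the `RobustKernel(method, scale, shape)` signature stated above. The scale value `0.1` is an arbitrary illustration, not a recommendation.
###Code
# Hedged sketch: build a Tukey robust kernel and attach it to the estimation
# method for outlier rejection. `0.1` is a hypothetical scaling parameter.
robust_kernel = treg.robust_kernel.RobustKernel(
    treg.robust_kernel.RobustKernelMethod.TukeyLoss, 0.1)
robust_estimation = treg.TransformationEstimationPointToPlane(robust_kernel)
###Output
_____no_output_____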
###Markdown
Voxel Sizes- It is the voxel size (a lower voxel size corresponds to a higher resolution), for each scale of multi-scale ICP.- We want to perform the initial iterations on a coarse point-cloud (low resolution, i.e. large voxel size), as it is more time-efficient and avoids local minima, and then move to a dense point-cloud (high resolution, i.e. small voxel size). Therefore the voxel sizes must be in strictly decreasing order.
###Code
# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
###Output
_____no_output_____
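###Markdown
As a concrete illustration of the `1x - 3x` rule of thumb from the parameter notes above, the correspondence distances can be derived from the voxel sizes (a sketch; the factor `3.0` is just one point in the recommended range):
###Code
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# 3x the voxel size at each scale, per the rule of thumb above.
max_correspondence_distances = o3d.utility.DoubleVector(
    [3.0 * v for v in voxel_sizes])
print(list(max_correspondence_distances))
###Output
_____no_output_____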
###Markdown
Save Loss LogWhen `True`, it saves the iteration-wise values of `fitness`, `inlier_rmse`, `transformation`, `scale`, `iteration` in `loss_log_` in `registration_result`. Default: `False`.
###Code
save_loss_log = True
###Output
_____no_output_____
###Markdown
--- Vanilla ICP Example 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from ICP
###Code
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
---Now let's try with a poor initial alignment
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
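###Markdown
To quantify how poor the identity initialization is before running ICP, one can evaluate it directly (a sketch using `evaluate_registration`, which this tutorial also uses later):
###Code
evaluation = treg.evaluate_registration(source, target,
                                        max_correspondence_distance,
                                        init_source_to_target)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
###Output
_____no_output_____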
###Markdown
As we can see, a poor initial alignment may cause ICP convergence to fail. Having a large `max_correspondence_distance` might resolve this issue, but it will take longer to process.
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3c.float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before running
# the ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
We may resolve the above issues and get even better accuracy by using `Multi-Scale ICP` --- Multi-Scale ICP Example Problems with Vanilla ICP (previous version):- Running the ICP algorithm on dense point-clouds is very slow. - It requires good initial alignment: - If the point-cloud is not well aligned, the convergence might get stuck in local minima in the initial iterations. - We need a larger `max_correspondence_distance` if the aligned point clouds do not have sufficient overlap. - If the point-cloud is heavily down-sampled (coarse), the obtained result will not be accurate. These drawbacks can be solved using Multi-Scale ICP.In Multi-Scale ICP, we perform the initial iterations on a coarse point-cloud to get a better estimate of the initial alignment, and use this alignment for convergence on a denser point cloud. ICP on a coarse point cloud is inexpensive and allows us to use a larger `max_correspondence_distance`. It is also less likely for the convergence to get stuck in local minima. As we get a good estimate, it takes fewer iterations on the dense point-cloud to converge to a more accurate transform. It is recommended to use `Multi-Scale ICP` over `ICP` for efficient convergence, especially for large point clouds. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from Multi-Scale ICP
###Code
# Setting verbosity to Debug helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
Plotting Convergence GraphWe can use `registration_result.loss_log` to plot convergence and fine-tune our application.
###Code
from matplotlib import pyplot as plt
def plot_rmse(registration_result):
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Inlier RMSE vs Iteration")
axes.plot(registration_result.loss_log["index"].numpy(),
registration_result.loss_log["inlier_rmse"].numpy())
def plot_scale_wise_rmse(registration_result):
scales = registration_result.loss_log["scale"].numpy()
iterations = registration_result.loss_log["iteration"].numpy()
num_scales = scales[-1][0] + 1
fig, axes = plt.subplots(nrows=1, ncols=num_scales, figsize=(20, 5))
masks = {}
for scale in range(0, num_scales):
masks[scale] = registration_result.loss_log["scale"] == scale
rmse = registration_result.loss_log["inlier_rmse"][masks[scale]].numpy()
iteration = registration_result.loss_log["iteration"][
masks[scale]].numpy()
title_prefix = "Scale Index: " + str(scale)
axes[scale].set_title(title_prefix + " Inlier RMSE vs Iteration")
axes[scale].plot(iteration, rmse)
print("Vanilla ICP")
plot_rmse(registration_icp)
print("Multi Scale ICP")
plot_rmse(registration_ms_icp)
plot_scale_wise_rmse(registration_ms_icp)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Vanilla ICP and Multi-Scale ICP `Inlier RMSE` vs `Iteration`")
# Plot the longer log against its recorded index; matplotlib draws the lone
# third array as a second curve against an implicit 0..N-1 index.
if len(registration_ms_icp.loss_log["index"]) > len(
        registration_icp.loss_log["inlier_rmse"]):
axes.plot(registration_ms_icp.loss_log["index"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy())
else:
axes.plot(registration_icp.loss_log["index"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy())
###Output
_____no_output_____
###Markdown
--- Multi-Scale ICP on CUDA device Example
###Code
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
###Output
_____no_output_____
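###Markdown
The cell above assumes a CUDA-enabled Open3D build; a guarded variant (a sketch that falls back to CPU so the cell runs everywhere) is shown below.
###Code
# Hedged sketch: pick the device at runtime instead of assuming CUDA:0.
device = (o3d.core.Device("CUDA:0")
          if o3d.core.cuda.is_available() else o3d.core.Device("CPU:0"))
source_dev = source.to(device)
target_dev = target.to(device)
print("Running on:", device)
###Output
_____no_output_____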
###Markdown
ICP registrationThis tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The inputs are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function `draw_registration_result` visualizes the alignment during the registration process. In this tutorial, we show different ICP variants and the API for using them. Helper visualization functionThe function below visualizes a target point cloud and a source point cloud transformed with an alignment transformation. The target point cloud and the source point cloud are painted with cyan and yellow colors respectively. The more tightly the two point clouds overlap with each other, the better the alignment result.
###Code
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.4459,
front=[0.9288, -0.2951, -0.2242],
lookat=[1.6784, 2.0612, 1.4451],
up=[-0.3402, -0.9189, -0.1996])
###Output
_____no_output_____
###Markdown
Understanding ICP Algorithm In general, the ICP algorithm iterates over two steps:1. Find correspondence set $\mathcal{K}=\{(\mathbf{p}, \mathbf{q})\}$ from target point cloud $\mathbf{P}$, and source point cloud $\mathbf{Q}$ transformed with current transformation matrix $\mathbf{T}$.2. Update the transformation $\mathbf{T}$ by minimizing an objective function $E(\mathbf{T})$ defined over the correspondence set $\mathcal{K}$.Different variants of ICP use different objective functions $E(\mathbf{T})$ [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) [\[Park2017\]](../reference.htmlpark2017). Different variants of ICP Point-to-point ICPWe first show a point-to-point ICP algorithm [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) using the objective\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\|\mathbf{p} - \mathbf{T}\mathbf{q}\|^{2}\end{equation}The class `TransformationEstimationPointToPoint` provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective.--- Point-to-plane ICPThe point-to-plane ICP algorithm [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) uses a different objective function\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [\[Rusinkiewicz2001\]](../reference.htmlrusinkiewicz2001) has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.The class `TransformationEstimationPointToPlane` provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective. --- Colored ICPFollowing [\[Park2017\]](../reference.htmlpark2017), it runs ICP iterations with a joint optimization objective\begin{equation}E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T})\end{equation}where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.The geometric term $E_{G}$ is the same as the Point-to-plane ICP objective\begin{equation}E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.\begin{equation}E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2},\end{equation}where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$. Function$\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [\[Park2017\]](../reference.htmlpark2017).To further improve efficiency, [\[Park2017\]](../reference.htmlpark2017) proposes a multi-scale registration scheme. The class `TransformationEstimationForColoredICP` provides functions to compute the residuals and Jacobian matrices of the joint optimization objective. 
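Before the API walkthrough below, a minimal NumPy sketch of the point-to-plane residual $(\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}$ for a single correspondence, matching the objective above (the function name is invented for this sketch):
###Code
import numpy as np

def point_to_plane_residual(T, q, p, n_p):
    # Apply the rigid transform to the source point q, then project the
    # difference onto the target normal n_p.
    q_t = T[:3, :3] @ q + T[:3, 3]
    return float(np.dot(p - q_t, n_p))

print(point_to_plane_residual(np.eye(4),
                              np.array([0.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 0.1]),
                              np.array([0.0, 0.0, 1.0])))  # 0.1
###Output
_____no_output_____
###Markdown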
--- Understanding ICP API **Note:** The `Tensor based ICP implementation` API is slightly different than the [Eigen based ICP implementation](../pipelines/icp_registration.rst), to support more functionalities. Input`PointClouds` between which the `Transformation` is to be estimated. [open3d.t.PointCloud]- Source Tensor PointCloud. [Float32 or Float64 dtypes are supported].- Target Tensor PointCloud. [Float32 or Float64 dtypes are supported]. **Note:** The initial alignment is usually obtained by a global registration algorithm.
###Code
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# For Colored-ICP `colors` attribute must be of the same dtype as `positions` and `normals` attribute.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
###Output
_____no_output_____
###Markdown
--- Parameters Max correspondence Distances- This is the search radius around each point in the source point-cloud within which the neighbour search will try to find a corresponding point in the target point-cloud.- It is a `double` for `icp`, and `utility.DoubleVector` for `multi-scale-icp`.- One may typically keep this parameter between `1.0x - 3.0x` `voxel-size` for each scale.- This parameter is most important for performance tuning, as a larger radius takes more time (the neighbour search is performed over a larger radius).
###Code
# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution or the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may have a higher value of the `max_correspondence_distances` for the first coarse
# scale, as it is not very expensive, and gives us more tolerance to initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
###Output
_____no_output_____
###Markdown
Initial Transform from Source to Target [open3d.core.Tensor]- Initial estimate for transformation from source to target.- Transformation matrix Tensor of shape [4, 4] of type `Float64` on `CPU:0` device- The initial alignment is usually obtained by a global registration algorithm. See [Global registration](../pipelines/global_registration.rst) for examples.
###Code
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
###Output
_____no_output_____
###Markdown
Estimation Method - This sets the ICP method to compute the transformation between two point-clouds given the correspondences.Options:- **o3d.t.pipelines.registration.TransformationEstimationPointToPoint()** - Point to Point ICP.- **o3d.t.pipelines.registration.TransformationEstimationPointToPlane(robust_kernel)** - Point to Plane ICP. - Requires `target point-cloud` to have `normals` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForColoredICP(robust_kernel, lambda)** - Colored ICP. - Requires `target` point-cloud to have `normals` attribute (of same dtype as `position` attribute). - Requires `source` and `target` point-clouds to have `colors` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForGeneralizedICP(robust_kernel, epsilon)** [To be added]. - Generalized ICP.
###Code
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
###Output
_____no_output_____
###Markdown
Estimation Method also supports `Robust Kernels`: Robust kernels are used for outlier rejection. More on this in the `Robust Kernel` section; a weight-function sketch follows the criteria cell below.`robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)`Method options:- robust_kernel.RobustKernelMethod.L2Loss- robust_kernel.RobustKernelMethod.L1Loss- robust_kernel.RobustKernelMethod.HuberLoss- robust_kernel.RobustKernelMethod.CauchyLoss- robust_kernel.RobustKernelMethod.GMLoss- robust_kernel.RobustKernelMethod.TukeyLoss- robust_kernel.RobustKernelMethod.GeneralizedLoss ICP Convergence Criteria [relative rmse, relative fitness, max iterations]- This sets the termination condition, i.e. when the iterations for a scale can be considered converged. - If the relative change (from the last iteration) in rmse and fitness is equal to or less than the specified value, the iterations for that scale are considered converged/completed.- For `Multi-Scale ICP` it is a `list` of `ICPConvergenceCriteria`, one for each scale of ICP, to provide finer control over performance.- One may keep the initial values of `relative_fitness` and `relative_rmse` high (loose tolerance), as we only want a rough estimate of the transformation, and make them lower (tighter) for later iterations to fine-tune.- Iterations on higher resolutions are more costly (take more time), so we want to do fewer iterations on higher resolutions.
###Code
# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for initial scales, i.e. we will be happy to consider ICP converged, when difference
# between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) in the initial coarse-scale iterations,
# and want our later scale convergence to be more accurate (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
###Output
_____no_output_____
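###Markdown
For intuition on what a robust kernel does, the IRLS weight of the standard Huber loss is easy to write down (a generic textbook formula, not Open3D API; the threshold `k` is a hypothetical choice):
###Code
def huber_weight(residual, k=0.05):
    # Inliers (|r| <= k) keep full weight; outliers are down-weighted as k/|r|.
    r = abs(residual)
    return 1.0 if r <= k else k / r

print(huber_weight(0.01), huber_weight(0.5))  # 1.0 and 0.1
###Output
_____no_output_____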
###Markdown
Voxel Sizes- It is the voxel size (a lower voxel size corresponds to a higher resolution), for each scale of multi-scale ICP.- We want to perform the initial iterations on a coarse point-cloud (low resolution, i.e. large voxel size), as it is more time-efficient and avoids local minima, and then move to a dense point-cloud (high resolution, i.e. small voxel size). Therefore the voxel sizes must be in strictly decreasing order.
###Code
# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
###Output
_____no_output_____
###Markdown
Save Loss LogWhen `True`, it saves the iteration-wise values of `fitness`, `inlier_rmse`, `transformation`, `scale`, `iteration` in `loss_log_` in `registration_result`. Default: `False`.
###Code
save_loss_log = True
###Output
_____no_output_____
###Markdown
--- Vanilla ICP Example 1. Set Inputs and Parameters
###Code
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from ICP
###Code
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
---Now let's try with a poor initial alignment
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
As we can see, a poor initial alignment may cause ICP convergence to fail. Having a large `max_correspondence_distance` might resolve this issue, but it will take longer to process.
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3c.float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before running
# the ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
We may resolve the above issues and get even better accuracy by using `Multi-Scale ICP` --- Multi-Scale ICP Example Problems with Vanilla ICP (previous version):- Running the ICP algorithm on dense point-clouds is very slow. - It requires good initial alignment: - If the point-cloud is not well aligned, the convergence might get stuck in local minima in the initial iterations. - We need a larger `max_correspondence_distance` if the aligned point clouds do not have sufficient overlap. - If the point-cloud is heavily down-sampled (coarse), the obtained result will not be accurate. These drawbacks can be solved using Multi-Scale ICP.In Multi-Scale ICP, we perform the initial iterations on a coarse point-cloud to get a better estimate of the initial alignment, and use this alignment for convergence on a denser point cloud. ICP on a coarse point cloud is inexpensive and allows us to use a larger `max_correspondence_distance`. It is also less likely for the convergence to get stuck in local minima. As we get a good estimate, it takes fewer iterations on the dense point-cloud to converge to a more accurate transform. It is recommended to use `Multi-Scale ICP` over `ICP` for efficient convergence, especially for large point clouds. 1. Set Inputs and Parameters
###Code
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from Multi-Scale ICP
###Code
# Setting verbosity to Debug helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
Plotting Convergence GraphWe can use `registration_result.loss_log` to plot convergence and fine-tune our application.
###Code
from matplotlib import pyplot as plt
def plot_rmse(registration_result):
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Inlier RMSE vs Iteration")
axes.plot(registration_result.loss_log["index"].numpy(),
registration_result.loss_log["inlier_rmse"].numpy())
def plot_scale_wise_rmse(registration_result):
scales = registration_result.loss_log["scale"].numpy()
iterations = registration_result.loss_log["iteration"].numpy()
num_scales = scales[-1][0] + 1
fig, axes = plt.subplots(nrows=1, ncols=num_scales, figsize=(20, 5))
masks = {}
for scale in range(0, num_scales):
masks[scale] = registration_result.loss_log["scale"] == scale
rmse = registration_result.loss_log["inlier_rmse"][masks[scale]].numpy()
iteration = registration_result.loss_log["iteration"][
masks[scale]].numpy()
title_prefix = "Scale Index: " + str(scale)
axes[scale].set_title(title_prefix + " Inlier RMSE vs Iteration")
axes[scale].plot(iteration, rmse)
print("Vanilla ICP")
plot_rmse(registration_icp)
print("Multi Scale ICP")
plot_rmse(registration_ms_icp)
plot_scale_wise_rmse(registration_ms_icp)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Vanilla ICP and Multi-Scale ICP `Inlier RMSE` vs `Iteration`")
# Plot the longer log against its recorded index; matplotlib draws the lone
# third array as a second curve against an implicit 0..N-1 index.
if len(registration_ms_icp.loss_log["index"]) > len(
        registration_icp.loss_log["inlier_rmse"]):
axes.plot(registration_ms_icp.loss_log["index"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy())
else:
axes.plot(registration_icp.loss_log["index"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy())
###Output
_____no_output_____
###Markdown
--- Multi-Scale ICP on CUDA device Example
###Code
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
--- Case of `no correspondences`.In case of `no_correspondences`, the `fitness` and `inlier_rmse` are `0`.
###Code
max_correspondence_distance = 0.02
init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
[0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)
if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
print("ICP Convergence Failed, as no correspondence were found")
###Output
_____no_output_____
###Markdown
--- Information Matrix The `Information Matrix` gives us further information about how well the point clouds are aligned.
###Code
information_matrix = treg.get_information_matrix(
source, target, max_correspondence_distances[2],
registration_ms_icp.transformation)
print(information_matrix)
###Output
_____no_output_____
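###Markdown
The returned value is an `open3d.core.Tensor`; a quick way to inspect it (a minimal sketch, assuming the 6x6 shape used for pose-graph edges) is to convert it to NumPy:
###Code
info_np = information_matrix.numpy()
print(info_np.shape)
###Output
_____no_output_____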
###Markdown
--- Now that we have a basic understanding of the ICP algorithm and the API, let's experiment with the different versions to understand the differences. Initial Alignment
###Code
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
max_correspondence_distance, trans_init)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
###Output
_____no_output_____
###Markdown
--- Point-To-Point ICP Registration We first show a point-to-point ICP algorithm [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) using the objective\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\|\mathbf{p} - \mathbf{T}\mathbf{q}\|^{2}\end{equation}The class `TransformationEstimationPointToPoint` provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective. 1. Set Inputs and Parameters
###Code
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
# Initial alignment or source to target transform.
init_source_to_target = trans_init
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The fitness score increases from `0.174722` to `0.372474`. The inlier_rmse reduces from `0.011771` to `0.007761`. By default, icp runs until convergence or reaches a maximum number of iterations (30 by default). It can be changed to allow more computation time and to improve the results further.
###Code
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The final alignment is tight. The fitness score improves to `0.620972`. The inlier_rmse reduces to `0.006581`.--- Point-to-Plane ICP RegistrationThe point-to-plane ICP algorithm [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) uses a different objective function\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [\[Rusinkiewicz2001\]](../reference.htmlrusinkiewicz2001) has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.The class `TransformationEstimationPointToPlane` provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective.
###Code
estimation = treg.TransformationEstimationPointToPlane()
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
The point-to-plane ICP reaches tight alignment within 30 iterations (a `fitness` score of 0.620972 and an `inlier_rmse` score of 0.006581). --- Colored ICP RegistrationThis tutorial demonstrates an ICP variant that uses both geometry and color for registration. It implements the algorithm of [\[Park2017\]](../reference.htmlpark2017). The color information locks the alignment along the tangent plane. Thus this algorithm is more accurate and more robust than prior point cloud registration algorithms, while the running speed is comparable to that of ICP registration.
###Code
# Overriding the visualization function with the best camera view for the colored-ICP sample data.
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.5,
front=[-0.2458, -0.8088, 0.5342],
lookat=[1.7745, 2.2305, 0.9787],
up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
source = o3d.t.io.read_point_cloud("../../test_data/ColoredICP/frag_115.ply")
target = o3d.t.io.read_point_cloud("../../test_data/ColoredICP/frag_116.ply")
# For Colored-ICP `colors` attribute must be of the same dtype as `positions` and `normals` attribute.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)
###Output
_____no_output_____
###Markdown
Setting baseline with point-to-plane registrationWe first run Point-to-plane ICP as a baseline approach. The visualization below shows misaligned green triangle textures. This is because a geometric constraint does not prevent two planar surfaces from slipping.
###Code
estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
Colored RegistrationThe core function for colored point cloud registration is `registration_colored_icp`. Following [\[Park2017\]](../reference.htmlpark2017), it runs ICP iterations (see [Point-to-point ICP](../pipelines/icp_registration.ipynbPoint-to-point-ICP) for details) with a joint optimization objective\begin{equation}E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T})\end{equation}where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.The geometric term $E_{G}$ is the same as the [Point-to-plane ICP](../pipelines/icp_registration.ipynbPoint-to-plane-ICP) objective\begin{equation}E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.\begin{equation}E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2},\end{equation}where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$. Function$\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [\[Park2017\]](../reference.htmlpark2017).To further improve efficiency, [\[Park2017\]](../reference.htmlpark2017) proposes a multi-scale registration scheme.
###Code
estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=50),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]
max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])
voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# colored pointcloud registration
# This is implementation of following paper
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017
print("Colored point cloud registration")
s = time.time()
reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_multiscale_icp.transformation)
###Output
_____no_output_____
###Markdown
ICP registrationThis tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The inputs are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function `draw_registration_result` visualizes the alignment during the registration process. In this tutorial, we show different ICP variants and the API for using them. Helper visualization functionThe function below visualizes a target point cloud and a source point cloud transformed with an alignment transformation. The target point cloud and the source point cloud are painted with cyan and yellow colors respectively. The more tightly the two point clouds overlap with each other, the better the alignment result.
###Code
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.4459,
front=[0.9288, -0.2951, -0.2242],
lookat=[1.6784, 2.0612, 1.4451],
up=[-0.3402, -0.9189, -0.1996])
###Output
_____no_output_____
###Markdown
Understanding ICP Algorithm In general, the ICP algorithm iterates over two steps:1. Find the correspondence set $\mathcal{K}=\{(\mathbf{p}, \mathbf{q})\}$ from the target point cloud $\mathbf{P}$ and the source point cloud $\mathbf{Q}$ transformed with the current transformation matrix $\mathbf{T}$.2. Update the transformation $\mathbf{T}$ by minimizing an objective function $E(\mathbf{T})$ defined over the correspondence set $\mathcal{K}$.Different variants of ICP use different objective functions $E(\mathbf{T})$ [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) [\[Park2017\]](../reference.htmlpark2017). A sketch of this two-step loop is shown after the next cell. Understanding ICP API **Note:** The `Tensor based ICP implementation` API is slightly different from the [Eigen based ICP implementation](../pipelines/icp_registration.rst), as it supports more functionality. Input`PointClouds` between which the `Transformation` is to be estimated. [open3d.t.PointCloud]- Source Tensor PointCloud. [Float32 or Float64 dtypes are supported].- Target Tensor PointCloud. [Float32 or Float64 dtypes are supported]. **Note:** The initial alignment is usually obtained by a global registration algorithm.
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# For Colored-ICP the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
###Output
_____no_output_____
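###Markdown
Before going through the parameters, here is a minimal sketch of the two-step loop described above. The helpers `find_correspondences` and `minimize_objective` are hypothetical placeholders, not Open3D functions; the real implementation lives inside `treg.icp`.
###Code
# A minimal sketch of the generic ICP loop (not the Open3D implementation).
# `find_correspondences` and `minimize_objective` are hypothetical helpers:
# the first pairs each transformed source point with its nearest target
# neighbour within the search radius; the second solves for the transform
# that minimizes the chosen objective E(T) over the correspondence set K.
def icp_sketch(source, target, T_init, max_distance, max_iteration=30):
    T = T_init
    for _ in range(max_iteration):
        # Step 1: build the correspondence set K between target P and
        # source Q transformed with the current transformation matrix T.
        K = find_correspondences(source, target, T, max_distance)
        # Step 2: update T by minimizing E(T) over K (point-to-point,
        # point-to-plane, or colored, depending on the ICP variant).
        T = minimize_objective(K, T)
    return T
###Output
_____no_output_____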
###Markdown
--- Parameters Max Correspondence Distances- This is the radius of the search region around each point in the source point cloud within which the neighbour search will try to find a corresponding point in the target point cloud.- It is a `double` for `icp`, and a `utility.DoubleVector` for `multi-scale-icp`.- One may typically keep this parameter between `1.0x - 3.0x` of the `voxel-size` for each scale.- This parameter is the most important one for performance tuning: a larger radius takes more time, as the neighbour search is performed over a larger region.
###Code
# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution or the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may use a higher value of `max_correspondence_distances` for the first coarse
# scale, as it is not very expensive, and it gives us more tolerance to the initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
###Output
_____no_output_____
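###Markdown
As a quick sanity check of the `1x - 3x` rule of thumb above, the distances can be derived from the voxel sizes programmatically. The per-scale factors below are assumptions chosen to reproduce the tutorial values.
###Code
# Sketch: deriving `max_correspondence_distances` from `voxel_sizes`,
# looser (3x) at the coarse scale and tighter at the finer scales.
# The factor values are assumptions, not Open3D defaults.
voxel_size_list = [0.1, 0.05, 0.025]
factor_list = [3.0, 2.8, 2.8]
derived_distances = o3d.utility.DoubleVector(
    [v * f for v, f in zip(voxel_size_list, factor_list)])
# derived_distances is [0.3, 0.14, 0.07], matching the values used above.
###Output
_____no_output_____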
###Markdown
Initial Transform from Source to Target [open3d.core.Tensor]- Initial estimate for the transformation from source to target.- A transformation matrix Tensor of shape [4, 4] of type `Float64` on the `CPU:0` device.- The initial alignment is usually obtained by a global registration algorithm. See [Global registration](../pipelines/global_registration.rst) for examples.
###Code
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
###Output
_____no_output_____
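###Markdown
As stated above, the expected type is a `[4, 4]` `Float64` Tensor on the `CPU:0` device. A numpy matrix such as the one above can be wrapped explicitly (a small sketch):
###Code
# Wrap the numpy initial transform as an open3d.core.Tensor with the
# documented dtype. `o3d.core.Tensor` accepts a numpy array directly.
init_source_to_target_tensor = o3d.core.Tensor(init_source_to_target,
                                               o3d.core.Dtype.Float64)
###Output
_____no_output_____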
###Markdown
Estimation Method - This sets the ICP method to compute the transformation between two point-clouds given the correspondences.Options:- **o3d.t.pipelines.registration.TransformationEstimationPointToPoint()** - Point to Point ICP.- **o3d.t.pipelines.registration.TransformationEstimationPointToPlane(robust_kernel)** - Point to Plane ICP. - Requires `target point-cloud` to have `normals` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForColoredICP(robust_kernel, lambda)** - Colored ICP. - Requires `target` point-cloud to have `normals` attribute (of same dtype as `position` attribute). - Requires `source` and `target` point-clouds to have `colors` attribute (of same dtype as `position` attribute).- **o3d.t.pipelines.registration.TransformationEstimationForGeneralizedICP(robust_kernel, epsilon)** [To be added]. - Generalized ICP.
###Code
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
###Output
_____no_output_____
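###Markdown
Colored ICP additionally exposes the weighting between its geometric and photometric terms through its constructor. A hedged sketch follows; the keyword name `lambda_geometric` and the value `0.968` are assumptions taken from the legacy API, so check the docstring of your build.
###Code
# Sketch: colored-ICP estimation with an explicit geometric weight.
# `lambda_geometric` weights the geometric term against the photometric
# term; the parameter name and the 0.968 value are assumptions here.
estimation_colored = treg.TransformationEstimationForColoredICP(
    lambda_geometric=0.968)
###Output
_____no_output_____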
###Markdown
Estimation Method also supports `Robust Kernels`: Robust kernels are used for outlier rejection. More on this in the `Robust Kernel` section; a usage sketch follows the next cell.`robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)`Method options:- robust_kernel.RobustKernelMethod.L2Loss- robust_kernel.RobustKernelMethod.L1Loss- robust_kernel.RobustKernelMethod.HuberLoss- robust_kernel.RobustKernelMethod.CauchyLoss- robust_kernel.RobustKernelMethod.GMLoss- robust_kernel.RobustKernelMethod.TukeyLoss- robust_kernel.RobustKernelMethod.GeneralizedLoss ICP Convergence Criteria [relative rmse, relative fitness, max iterations]- This sets the condition for termination, i.e. when the iterations for a scale can be considered converged. - If the relative change (with respect to the last iteration) of the rmse and fitness is equal to or less than the specified value, the iterations for that scale are considered converged/completed.- For `Multi-Scale ICP` it is a `list` of `ICPConvergenceCriteria`, one per scale of ICP, to provide finer control over performance.- One may keep `relative_fitness` and `relative_rmse` high (loose) for the initial scales, since we only want a rough estimate of the transformation there, and low (strict) for the later scales to fine-tune the result.- Iterations at higher resolution are more costly (take more time), so we want to do fewer iterations at higher resolution.
###Code
# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for initial scales, i.e. we will be happy to consider ICP converged, when difference
# between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) in the initial coarse-scale iterations,
# and want our later scale convergence to be more accurate (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
###Output
_____no_output_____
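###Markdown
The `RobustKernel` described in the previous section plugs directly into the estimation method. A small sketch using the Tukey loss; the scaling value `0.1` is an assumption and should be tuned to the noise level of the data.
###Code
# Sketch: outlier-robust point-to-plane estimation via a robust kernel,
# following the `RobustKernel(method, scale, shape)` form quoted above.
sigma = 0.1  # assumed scaling parameter
robust_kernel = treg.robust_kernel.RobustKernel(
    treg.robust_kernel.RobustKernelMethod.TukeyLoss, sigma)
estimation_robust = treg.TransformationEstimationPointToPlane(robust_kernel)
###Output
_____no_output_____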
###Markdown
Voxel Sizes- It is the voxel size (a lower voxel size corresponds to a higher resolution) for each scale of multi-scale ICP.- We want to perform the initial iterations on a coarse point cloud (low resolution, i.e. a large voxel size), as it is more time-efficient and avoids local minima, and then move to a dense point cloud (high resolution, i.e. a small voxel size). Therefore the voxel sizes must be in strictly decreasing order.
###Code
# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
###Output
_____no_output_____
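###Markdown
For intuition, each scale corresponds to running ICP on a down-sampled copy of the clouds. `multi_scale_icp` performs this internally, but the same operation is available directly (a sketch for the coarsest scale):
###Code
# Sketch: explicit voxel down-sampling at the coarsest scale above.
# A voxel size of 0.1 keeps roughly one point per 0.1-unit cube.
source_down = source.voxel_down_sample(voxel_size=0.1)
target_down = target.voxel_down_sample(voxel_size=0.1)
###Output
_____no_output_____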
###Markdown
Get Iteration-wise registration result using callback lambda functionAn optional lambda function can be passed; it receives a string-to-tensor dictionary of attributes such as "iteration_index", "scale_index", "scale_iteration_index", "inlier_rmse", "fitness", and "transformation", on the CPU device, updated after each iteration.
###Code
# Example callback_after_iteration lambda function:
callback_after_iteration = lambda updated_result_dict : print("Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
updated_result_dict["iteration_index"].item(),
updated_result_dict["fitness"].item(),
updated_result_dict["inlier_rmse"].item()))
###Output
_____no_output_____
###Markdown
--- Vanilla ICP Example 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
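# Note: `save_loss_log` is kept for reference; the `treg.icp` call below
# relies on `callback_after_iteration` for per-iteration information instead.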
save_loss_log = True
###Output
_____no_output_____
###Markdown
2. Get Registration Result from ICP
###Code
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, callback_after_iteration)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
---Now let's try with a poor initialisation
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
As we can see, a poor initial alignment may cause ICP convergence to fail. Having a large `max_correspondence_distance` might resolve this issue, but it will take longer to process.
###Code
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before using
# ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
###Output
_____no_output_____
###Markdown
We may resolve the above issues and get even better accuracy by using `Multi-Scale ICP` --- Multi-Scale ICP Example Problems with using Vanilla ICP (the approach above):- Running the ICP algorithm on dense point clouds is very slow. - It requires a good initial alignment: - If the point cloud is not well aligned, the convergence might get stuck in a local minimum in the initial iterations. - We need a larger `max_correspondence_distance` if the roughly aligned point clouds do not have sufficient overlap. - If the point cloud is heavily down-sampled (coarse), the obtained result will not be accurate. These drawbacks can be solved using Multi-Scale ICP. In Multi-Scale ICP, we perform the initial iterations on a coarse point cloud to get a better estimate of the initial alignment, and use this alignment for convergence on a denser point cloud. ICP on a coarse point cloud is inexpensive and allows us to use a larger `max_correspondence_distance`. It is also less likely for the convergence to get stuck in a local minimum. As we get a good estimate, it takes fewer iterations on the dense point cloud to converge to a more accurate transform. It is recommended to use `Multi-Scale ICP` over `ICP` for efficient convergence, especially for large point clouds. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
callback_after_iteration = lambda loss_log_map : print("Iteration Index: {}, Scale Index: {}, Scale Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
loss_log_map["iteration_index"].item(),
loss_log_map["scale_index"].item(),
loss_log_map["scale_iteration_index"].item(),
loss_log_map["fitness"].item(),
loss_log_map["inlier_rmse"].item()))
###Output
_____no_output_____
###Markdown
2. Get Registration Result from Multi-Scale ICP
###Code
# Setting Verbosity to Debug, helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
callback_after_iteration)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
###Output
_____no_output_____
###Markdown
--- Multi-Scale ICP on CUDA device Example
###Code
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
###Output
_____no_output_____
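###Markdown
The CUDA example above assumes a CUDA-enabled build with an available device. A small guard sketch that falls back to the CPU tensors otherwise:
###Code
# Run on the GPU when available, otherwise keep the CPU tensors.
if o3d.core.cuda.is_available():
    source_dev, target_dev = source.cuda(0), target.cuda(0)
else:
    source_dev, target_dev = source, target
registration_dev = treg.multi_scale_icp(source_dev, target_dev, voxel_sizes,
                                        criteria_list,
                                        max_correspondence_distances,
                                        init_source_to_target, estimation)
###Output
_____no_output_____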
###Markdown
--- Case of `no correspondences`. In the case of no correspondences, the `fitness` and `inlier_rmse` are `0`.
###Code
max_correspondence_distance = 0.02
init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
[0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)
if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
print("ICP Convergence Failed, as no correspondence were found")
###Output
_____no_output_____
###Markdown
--- Information Matrix The `Information Matrix` gives us further information about how well the point clouds are aligned. It is a 6 x 6 matrix computed from the correspondences within the given distance threshold, and it is typically used to weight edges in pose graph optimization (see the sketch after the next cell).
###Code
information_matrix = treg.get_information_matrix(
source, target, max_correspondence_distances[2],
registration_ms_icp.transformation)
print(information_matrix)
###Output
_____no_output_____
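###Markdown
A typical consumer of this 6 x 6 matrix is multiway registration, where it weights an edge of a pose graph. A sketch using the legacy `o3d.pipelines` API; the node ids `0` and `1` are placeholder assumptions.
###Code
# Sketch: feeding the ICP result and its information matrix into a
# pose-graph edge (legacy pipeline; node ids are placeholders).
edge = o3d.pipelines.registration.PoseGraphEdge(
    0, 1,
    registration_ms_icp.transformation.numpy(),  # 4x4 transform as numpy
    information_matrix.numpy(),                  # 6x6 information matrix
    uncertain=False)
###Output
_____no_output_____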
###Markdown
--- Now that we have a basic understanding of the ICP algorithm and the API, let's experiment with the different variants to understand the differences. Initial AlignmentThe `evaluate_registration` call below reports two metrics: `fitness`, which measures the overlapping area (number of inlier correspondences divided by the number of points in the target; higher is better), and `inlier_rmse`, which measures the RMSE of all inlier correspondences (lower is better).
###Code
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Initial guess transform between the two point clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
max_correspondence_distance, trans_init)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
###Output
_____no_output_____
###Markdown
--- Point-To-Point ICP Registration We first show a point-to-point ICP algorithm [\[BeslAndMcKay1992\]](../reference.htmlbeslandmckay1992) using the objective\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\|\mathbf{p} - \mathbf{T}\mathbf{q}\|^{2}\end{equation}The class `TransformationEstimationPointToPoint` provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective. 1. Set Inputs and Parameters
###Code
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
# Initial alignment or source to target transform.
init_source_to_target = trans_init
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
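# Note: `save_loss_log` is kept for reference and is not passed to the
# `treg.icp` call below.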
save_loss_log = True
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The fitness score increases from `0.174722` to `0.372474`. The inlier_rmse reduces from `0.011771` to `0.007761`. By default, icp runs until convergence or reaches a maximum number of iterations (30 by default). It can be changed to allow more computation time and to improve the results further.
###Code
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
###Output
_____no_output_____
###Markdown
The final alignment is tight. The fitness score improves to `0.620972`. The inlier_rmse reduces to `0.006581`.--- Point-to-Plane ICP RegistrationThe point-to-plane ICP algorithm [\[ChenAndMedioni1992\]](../reference.htmlchenandmedioni1992) uses a different objective function\begin{equation}E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [\[Rusinkiewicz2001\]](../reference.htmlrusinkiewicz2001) has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.The class `TransformationEstimationPointToPlane` provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective.
###Code
estimation = treg.TransformationEstimationPointToPlane()
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
The point-to-plane ICP reaches tight alignment within 30 iterations (a `fitness` score of 0.620972 and an `inlier_rmse` score of 0.006581). --- Colored ICP RegistrationThis tutorial demonstrates an ICP variant that uses both geometry and color for registration. It implements the algorithm of [\[Park2017\]](../reference.htmlpark2017). The color information locks the alignment along the tangent plane. Thus this algorithm is more accurate and more robust than prior point cloud registration algorithms, while the running speed is comparable to that of ICP registration.
###Code
# Overriding the visualization function with the best camera view for the colored-ICP sample data.
def draw_registration_result(source, target, transformation):
source_temp = source.clone()
target_temp = target.clone()
source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
o3d.visualization.draw_geometries(
[source_temp.to_legacy(),
target_temp.to_legacy()],
zoom=0.5,
front=[-0.2458, -0.8088, 0.5342],
lookat=[1.7745, 2.2305, 0.9787],
up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
demo_cicp_pcds = o3d.data.DemoColoredICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[1])
# For Colored-ICP the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)
###Output
_____no_output_____
###Markdown
Setting baseline with point-to-plane registrationWe first run Point-to-plane ICP as a baseline approach. The visualization below shows misaligned green triangle textures. This is because a geometric constraint does not prevent two planar surfaces from slipping.
###Code
estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
###Output
_____no_output_____
###Markdown
Colored RegistrationThe core function for colored point cloud registration is `registration_colored_icp` in the legacy API; in the tensor API used here, the same algorithm is selected by passing `TransformationEstimationForColoredICP` to `icp` or `multi_scale_icp`. Following [\[Park2017\]](../reference.htmlpark2017), it runs ICP iterations (see [Point-to-point ICP](../pipelines/icp_registration.ipynbPoint-to-point-ICP) for details) with a joint optimization objective\begin{equation}E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T})\end{equation}where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.The geometric term $E_{G}$ is the same as the [Point-to-plane ICP](../pipelines/icp_registration.ipynbPoint-to-plane-ICP) objective\begin{equation}E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2},\end{equation}where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.\begin{equation}E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2},\end{equation}where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$, and the function $\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [\[Park2017\]](../reference.htmlpark2017).To further improve efficiency, [\[Park2017\]](../reference.htmlpark2017) proposes a multi-scale registration scheme.
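For reference, the projection $\mathbf{f}(\cdot)$ onto the tangent plane at $\mathbf{p}$ can be written out explicitly from the definitions above (this closed form is standard, though not spelled out in the summary):\begin{equation}\mathbf{f}(\mathbf{s}) = \mathbf{s} - \big((\mathbf{s} - \mathbf{p})\cdot\mathbf{n}_{\mathbf{p}}\big)\,\mathbf{n}_{\mathbf{p}}\end{equation}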
###Code
estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=50),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]
max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])
voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# colored pointcloud registration
# This is implementation of following paper
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017
print("Colored point cloud registration")
s = time.time()
reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_multiscale_icp.transformation)
###Output
_____no_output_____ |
MO_Aula_SciKit.ipynb | ###Markdown
Scikit-Learn * É considerada como a biblitoca de Python mais utilizada para a implementação de métodos baseados em algoritmos de aprendizagem de máquina (*machine learning*).* A versão atual é a 0.24.2 (abril 2021).* URL: http://scikit-learn.org Formulação do Problema * Problema de classificação **supervisionada** de texto.* Hoje iremos investigar o método de aprendizagem de máquina que seja mais apropriado para resolvê-lo.* Considere um site de notícias que publica matérias jornalísticas de vários temas. * Economia, saúde e esportes são exemplos de temas. * O objetivo é criar um método classificador que receba um texto de entrada e consiga identificar qual é o assunto do texto.* O classificador assume que cada texto está associado a um tema.* É um problema de classificação de texto multiclasses. ![problema-machine-learning.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABBkAAAIjCAYAAAExSfOVAAAACXBIWXMAAA7EAAAOxAGVKw4bAANc+klEQVR4nOydBVwWyRvHf2+/dDdISIoCigV2d2J3nN116oXn3Xl3xp39907P7u7u7kRRMBClu+Ht9z+zLyAICihg3H75LLM5O7vvs795ZnZ2hq9Wq8FSIpRk4n7AcSoy8co4LeUG/1MngOXzgjUIlgKwBlEK2k7cg1XftYS1qQ7mbbqJc7dfw9RQm9lmrC/GtYeRuLW+P2oN2sSsu7iyN3S0hJ8yyaWGNYhScHhhF2RkyZn56f1rMxNFoVDB75st4JC/AxdfMOu4XA7WHXqEsT1qfLL0fgisQZQCLpcLfV1RofVCIRd3Ng7MW+7S2KUik1Wm/BcNQi1RKpG23RkCEx3EGq6Ee916JTlOVd4J+xz4LxoEXs61QauFKnSunoYF2w9BLvGFj68vgoKCIBAIkJycDENDQ5IVKLBkyRJMmDCBHvauomN9dxfXS8HPnr7rdLSoqm7epClOnzv79jZO2VxR2fGfMwiFXAn+dXusco1DZjofw7vfQ2dhHeTWx1Aj0NPTY5Y5HA74fH6uQbyTJ09D4OzgiGs3b8KvTh3y86thbmHBGNaadWvhX68eYwyHDx7CyZMnYW9vj5s3bmDH7l0Vccml4j9nEMQRQIhaAhOuCts5aVi4wgSJV5xwt98ixkegRkDh8Xg4deoUWrRowax7XwUe3f487CUz//xlaN76ZCVglE9X2nfswEyfM/85g+DzOIj1r4RmLWTI6puMZaMfYPa5IGabSlXYTShBTe5lvEP6jb6Y+sk3/OcMgsAZ+t0hZmbnO7P9dyOTyfpkZmZuyb/OyMjos/MFPpT/okGwvAfWIFgKwBoESwFYg6hgSFGU8VLv378PHx+fCjnnxBXXIzd838a2JPuyBvEJoMZQkSwe7Vfi8g5rEKXk0v3Xi2q4mBRYp1KruVwOp8RV25deqDEuoHqZp+1txGLxvFtPE1419Lb7u6THsAZRSprVdrag4e7zT0d3a+y64kPiKM4YXl3YiJSqfeBt8nE/j5aW1gxiDKU6hjWID+RDjaEk2DcaAHs6o5ZDyREUeIlyaLgzOqx6Xl6nZg2ipKxZs2bpzJkzxx07dgy+vr4Fto0dOxbnz59PevTokck7Di8VMTe3Q+HdC/UcbKA08EBE8AVmvYWZBWLjX2Hb3QT0rmFaFqcqBGsQ7yev3nro0KHMVBTLly+ngXH+/XV1dVUZGRkfVHltWbsXE76KjiuwPjY+lgl71xB/SLQlgjWId+Dq6qp++vQD6rZzIMbwIS20c0hF0N9D0PNOP1z4twtMOID0wGDwKrvBoul2NJp7EHuHVPrw6N8DaxDvIL8x+Pv74+rVq8UeQ4uTtP3EunXrPurcqvDdMLW3huJUNAS5b0m4HHCEIqiJX3Hv9GVgSJ+POse7YA2iBJTEGCi0ouljjYHCtRsKC1I4CG77Zp2ow1omTIqf9NHxvw/WID5zrAI2IHrPwOJ3LCNYgygWFeKVXJjluIffrrmBtVeSkPBvDYBnAbFABChksLfUQ0h0GloN+RUn1v748aeVnQOETZhZ6qlyyP+J5+RY3KR8m/WzBlEMHK4O1KpsNPzrHtrvqoP512VYsnEplke1whgi6xK5FGIOhzEGypZe0WVz4hxjeKMOnHI3BgprEMVAjYFycUp1YIqMmZdeGF9gH0m+VlWmLcugvkoWgzHzT+F/P/TXnC9kHURug5GrFeUJaxCfI0IT7FqznDGI5ktf4PR4YgzKJySLckHvbek4MWkwkmL2l8upWYN4BxcvXkTDhg0/ybkVCUHoMXQUejeqipdPE2C12BzRoYHYEpiKf7zPw6CcjIHCGsQ7IMbAaHObNm2OEVqX9LjatWuH3Lx50/1jzs039cHyH3yAHwYVWN/Xy4D87/IxURd/7nKN/SuAGEOb0uxPjKG8klIhsAbBUgDWIEqJXC5vl5GRcTj/OrYZPkupCJi5Sz22pWmFtaH8GFiDqACoMVAqsmHth8IaRAXgVq2WU8jDW6Gh6QbwqaBzfmg2xhpEBWBtqvvSukkTTpNPnZASwBpECXkWkeJlqqN+UNQ2+q3F3suvHg3t4FOtotNV1rAGUUJcbA0DkfMiIfdjG8q+a5E/DGlb9behHYw+WdrKEtYgPoD8+fOQtl+HIeTCGgRLAViDYCkAaxDvQV2KjsCVkOPEWk8I+LEwsGkOK8Mg9Lv/I2pcvY9FaxaUKA5Obn9Gn5Bcg/iqe0CfO3fuzBkzZswtz3Ns+MsEJoZAouk1pIe2h0pbCO2rZ9C05naytWQG8TnAKkQZMWRKGhas2YgpbT1xdbcdvASROPM0CKPK/5te2kGaMi0tTV9HRyezuH2VSiWP7v+u7axBvIcBh++Bp+IgSy7FrchkiHh8pqc6CID5lblo16Rx3r7Tpn6HNt6rMLb9BMz9eyROHfwLkx3CIJEIyjWNJJdRT2xhgzbVbTIuPk15b5Yje7BNLRQJ8L8l+zDm7y1F7ptnEJUcPPA67EmJEzJ04VmsmdwU+xeMRedpy/PW//PDWIycs/w9R74N/Yr+3R85bdy6EwP69ChFfGXHxvbVMfJkIPhifSgU8dASCMnTCCbJI8JUiMi3by2jeeBIxejSQYXYWBBD0IWHuwQp2eVrEJRzwSmo
5ahX7H4XLwTCwEAHg755dxOPNwZRpxMO/jURhy4+xL8H9qBD41Z4Eq+CMjsdztUbIiXkEhKyVOBr6aGlmwEyas6ANC0aDbv3hSwzCp1atoS/tSXa92iF03N6459zL1Bj8hpsGNcZnWvaYOedGBxd/StaTzwCdcp97LpwEcNa1URgyHMg7hhadP0BkeGJyLBsgebdOuD6v5MgVyjhVdUVe889QPLd/dDX1cKBE0fx9Po5jJjyMy7ceVyiGyYWiyUl2rGoY7l8yIjLqCL+JZFbpv9KqhJmxIjVKjU4XM2D9iK6Fy6tO4Mo/1aY3WQY7qfxIOdp4fcffsAdaTZ0FWq46Wh9aDLeCfF7OZNa2pbIB2ze0hdnTt6Bjs67vw3NM4j+XRtCy1qB5jUGIl2pj0Pnr2Lxyp2YOKI3lq7dg7YrlyDz0XGcvBGMb8ZOg5GO5lCRvhUTHrvyqODJf9CE37XT9A4/L2f9qwc98/ZhjIFi3ganLr9ltdM6vpXU3/Lm3Bv3JMbQEyXlYwyCMQB6uxVScMRvflAOWZdN1mkLxWi0bx9u+zSDzNgTIg83aGuLscetCqRRr7BkxmSoeRyI9Q2RMfOHD01GkVjN4GoMoQYQNCOh+C/P3btxgpfsUzcb/8c7s5Y8gzh1MwG7/xpQYCM1Bsr4IQGaFY27wLtxqdP9yTE0NEwpyX5T1t9XLxjozcvfG0zvKhbYFByLzrU9cOYxzSS4jELQ6Wp0EprbW6NpZDjurdgCfiUzSN0dcWT/PmzgqfFdRiKOdjyMCOnPWLJ8R4nTS9MhlyuwdFjNIn+4rct/nZsQerde9EIVJ9coPOeaJkbPVTH7U8VYdDJCc2zwbjU1BDorGH9UDUFfTKQhQT46C7nbcskziLeN4Wsiv0HQm/2+fadteMB44H8N8mFuVE0LQ2x7HAu5Uk27sNY4ldAox3eXX6G5pQEO790H++kTkLlmJWLUChg4O6BXcjwstl/CZe0Y+LYy14zY9Q6mbsh7Z5aXNoGAz6Q1Nx356TP2xxlSqVSUkJiQ10kENYbcrCPPGCj0BydG8UpqMvfwAMfu7Te+ZDrYli9tq9knZI8cbgF5jk6eQZy4GYLQ+0HwMM1AeqocRvoc3E3Ug5gnxvDedfA8WwjhszPgmJjC2qYaImLjYEhKOUfPREJt447uvoaIiomF7NVDJFv6wdTUkty8DPCEJjCUhUDP1AObV6xFv9FDEBwvQ3RsIhraJOPBzXt4rPREXEYavJxtoOZrQ1skhrXqKWw9yHnP7ibpMIJ33TqoKibZjxl9ochFZsRtZEIbqbHJOHcnGhyREMaVXRFQv3CD5w/JMuiP8edAb2ZeqlZCoeIhIz2JxGXFGAXtFF3A4SL99RqIGjVCxJUTGN2mJX4JegLjLh3xKj4e2R37YeZfP2ERpz+qWptj1byxGDx5IfmxS/4FFk2HNCXp3PKJTZvmXz+jQ2XJ9hpRBfYtYAj5IUZhH3dvhb25x25sfFlwm1pdoKSZt9CqthtuqhJRu64/yMNAPOYk1BcbM9uuvs6AfyVdJOt0hpGYh8zkMNg7khuvzELv3m96U7GvTJ6Eym/eAGdDC5pc14P5T42BSZ+ZkEzU97BCjVZVaBaIyGzAJieLjrl9CJY1O+DgzVfwb9oNffNi9M6b07GtidO3YtGpVhW4FOzQpRD5FcLbwfCdXdA/CEvpTsPE6Njb62e2qpVbU7m0qSumnnsIiToJ9i71ER8ZTW4cB8RGEHj2CuTateHg1xw7jh6Glq0NeBevYreHK/xbtMKVqi7oOXM8atR3xQI3byydVQ1T/ggpcF4TvTwDYXpOT0yXOeWuKEohKOOXnKy6yKPKo6K2BQUFqatXr067YabD/2giN68+uuirLthZWgHroMZA4dEk5BgDhRoDhRoDRcfIQbOBp130OXJ426eWklOL8pcws14D2pqOL2zy7UyNgdKxNtPTEpQyKVGawiPZdKpl8d7z55JfIQY0dnhnGfZdEk30AAubVMdfjX3Q8sZDnOviDRVfirW/j8A9/msoHqpw38EObs3qohPxN07cv4Pf0tOwy70KlCIRRLqmWO6Wguz4jUgVFD79zK6aB4bD4VTOTYceV3589oBa7ywfOnpUCXr27NlYFxcXpoz/78+jrg/76e+68fHxVp6entQY0KtXL8H27dvzH6YmWQUzo1KpeETplHAPKNDLTZ5BxCpUiCfFStOYuzByqolbe1fDv8dwbP7fetiZyJDo2B7pUa+Jp62L+i4S2FnaQy0yRParG5ClJUBm6I7UiMe4FGUGZ+UjRBnXA5d44E3ruCD5wnpcS7dGM7+qEBoYw0JLhcQHJ3DmYSbUWenwcdLFtXh98E0tYJoUjER9N2jp6aFL02oQkp+HGsPZPZvwKtsCVZs2QEJsOhxJduXu6ojAuGyYanNgJlRAINQt8uaV1Kl819OYC33VcKquV86SCM7aQmx/yIeWyBgDEtOwTT8Ld+fORquu3XCjmgFcNqwFNyIKHFsf1L2YgVk2p/DLwksfnY5ciDH8j/gNy2lW8erh5QRaBDUzM4smKOzt7fnEKN74CTRryHEgiTFwuSkvlDB24RCj8iLLD35esfHV1iW/OeQZhAWfCwt98vjqawYWq0+MgTJgzKB8SbAulCixc/28eTsnd1Rl5hoU2Mem3eCc9W+wrN0VfWu/WX6z3a/Ii28a0D9fMt7IiZd58WX7khpEaWk1bQNa5cw/jkrEOitjooISPHz6Gk28EpB58ypOHTkOPnFAm2xZjVvbo94bX2kIDw/3uxp3Jq8nk34LdjWnjiZVQysrKwFVCMqDe3cmTjprwj87pXGekRFlIE//Q9mzeLnazMK6Erk/nOTYBCZ7zDOI63duwblqZdx+rQO3rPOw8myBkLO7kJCQirhUCRTm1eFSoxpcBGGQxUYg8HUy9C0ckJ6YhASeEap4esAg4hISBW7gk6zl5r1QDB9I5EmlwOnr99DYTorjV57Agvgeteo0QGB4ErzsjFERfEw9RElxNTeAXs0aULVogy6V9HHw4jHIlBzyfNWE46QJSP7hV/DrFm3spWXSj/PUo/t3qd7TdxCn58lBzDpzA/MC1hYUHOI7dfbC24t/noyzU3wLK45bV1H+oeKObV/J7JNnEHV9azFha2Yvjd17tyyq8scHIBJoVZQjZ98Vjrl7eeV4+1w+mvtr4m7f642aVJQxVBQhx1rij2aeUAlXYOSfXfDt9ydhNOZfXFu2BBcylHh87x7cOweUybmCn4XCxdX1fv51cXHxVvNXbLizdunvTE+lnu5ud36dOqxWRkaG+s6dO8W43Rp8fX3vMAYhkUi0yCROSUkxpFP++fxTmVxNCaFPNZ2o3L895V9fkWl6H0IdAyzfdhlHdvdApMQBJiZ8zOrQDgtCgtBUkoItq1bjp/bN8dvsHzHzx9ng8j5suB36486ZNrxmUT/ymIEBnXPX0x+3Zs2at0sbP2MQ+W/+B6XyP07I05cw9d+B5+PmYe/GfXh44CRsqnjg8upNCNu4HlECLWSoFTDWN4LY0gJqhfqDh4enP3T
Zpr4g7OvvUvL9ohWkGCxHUkYaNnYaBz1tEYRL5kPn2VM4NayF7yp1wZjh+th04Qx61W8ED1LiMFyyACqpHNmSdKZSq22nVvD08IWSS3ufkUAk1MHiub8RG/n0g3SxBlFCktLTER6TgGuX78C3fn1kxL9G6+VTcXjTOtwNeoB2TZuCm5QKZZcWWNm3P+7duQZuSiTaZcaB36kNsmMiII7NgBAcVN23A5ci40nJVQiu/BX4elWwRM35LEbxZA2iBNAKy+27D0EqzYSntT5IIQKvstMR9ewlnD1qoF+PAYiOS4IwORs26xMR1XsGTtVqAavgF0iKi2fedkJXTH5/IQR8PtrVbQiRvh7c3V0gFGlj4tyJ+AyaUzKwBlEC6I81enAfjO/fF5XsHRAf+gJulgZwMqkKHnmyL5+7gksXL+HPRr1wIj4ekdwU/Pbr/yDTt4RFZUukdvUhxYBMyOWJkHHUCAt6gYYkO0kPTSTWFoNp7UezBvElYmygj8zEFDwIeQg9sR4sLa2QnZWN2MQEpKWkIF4lR2eVA/7XeDjcHN1RSaAPZXo27gc/hFqmQEJSCngyJRQSBWLCIyDiCxAeG45spZwZMzT3TeqnhDWI95B/RF/tyg4Y3ncAoFDCqUETpvWURKHAleNnoGNVCb8u+h9i+w3GU5UAMrkMai1tqMxsIM/KQH1nI1RPCYcp2Z8nzca9+FgoLEzJPiKY21lApaa1ivT9OGsQnzXpxJGUSqW01xi8vn4LKoUMAoEYKpkMMlJ0TM/OxPihg5FB1CGb7MchP6ichJmSbKSlZZCSiBLEtUB8fCSMq1ZBJStrGGlz0HNQP9gYGYCjVEFODEtOspGoyBjY2pVu9JvygF+Kb1H+63zojfo8nIMSwKoDSx6sMbDkwRpDCaGef8MR23FxZS/UGrQJ19b0hd/QN+PA31rfn1lfq4oVVnzbHF2+3Y8Df3b9hCkuPawxlBCVSs0Ywq9rrjHLHPKnJRIQ51Get8/Pw+rjp38vg1YhrJje4lMl9YNhjaGE8Pmadwo/DavHTJTL/xYc0qh9g8rMRLExK/6rqc+N/5wxkCKBelyz6vh5RDpENjzo1Av5ctz9cuY/ZwzPeneEi2UKWozJxI2gAGQ+qApd70dM03lap0ArlbS0tGBjY4PIyEhmXQ5F2oy7s4s6+Pmzd55vxDfD1BfOn0cR+3x2NvifM4YRZ25jcTUjuLnwMMbvMHh8fSwPYT6DZ2oexWIx0yKZhgqFotj46D4x0dGo7+ePNevXY/rUqbh++xacHRzxIOgRbGxtmX16de8BfQMDnD19Gs/DXhYb76fgP2cMu/62gXQDEJakxJTxKZiyVAKuWsUYAn1PQA1BIHjzNXb+qumiyP1hc0NqCPmXZ83+iZm+BP5zxmAScAUZRo5orqqGu7P52Blyl/zgXKbomAtViVLw2cn9h/KfMwYOhBy9ppHMfIPmpT8+f7+TFLZ3e5avEtYYWPJgjYElD9YYKhBSKqHfrGRX9HlL6tewxlCB1B6xNXteb2um+PE5DoTCGkMpeB2b5qr3Vj8dC3fc3jy5Z81+JY1DrWOLZ1Fp5T4IilgsZrrkUqh5Je5slTWGUlDJQv8pcuoVwuPSXezM9Z4RQyhVHDVcTJnpfZiZWyE+7uPG/tbS0ppR2mNYY/hAqCGUR7z/tLMkhhDDzL9+/QJ1q2s+bpaoRWglkmFzdGK5fZPFGkMJsbGxkfXv318wd25B1aVDODdv3hzbtm37JiAgYM3HnmfkkZg8ZTCvVDlvfVJSOOIVfNhZmCEqNv5jT1MkrDG8AwsLC3Us7T42B/oGsyjoWN45HWmszply+eCaydwsgnYFGpWYlLfejPxa5WUIFNYY3kF+Q/gQaF/RtCue0hwz+64Cs2vwQXshEdOWF6pYmFi1g+M3y7H+ez9Uzel6a98gB0w7nI7nCYkflca3YY2hGBo0aICDBw/Ssnqx+1pZWSE6WvNUf8wnCHkFFkk8jQj6Rvp5hkDZeDod+pyy/8SBNYZiuHSp+E68csk1hA+FqgJF830VERXtakiMu1tov30RZasIubDG8JlBswr5CGv8disub52p20QkhCwu93OzxlAMeiIBXHlKKFVEsS2M8eBVAoQCMfj61tDquAyJ69ox+42Yfxi3/hmCu6FxxcRYPLmGMPqEFCtaZjDziSRXMCFiUXX4AVh6euL0BOePPs/bsMZQDOk5TeGjiDFY53wfK5NTFy8bfPvp4Aim44/qT/HPkb/B+fbjDSE3q6CsaEU7UBUVUIVHqzp99DneBWsMJcS60IfSWuC7V4Xi1dIyP5etpRkiYt4UIdOIKugTVYgnoVk5NqVhjeEjkJwYXi7xyuWaZnfhr1+jds4LreikpPccUTawxlBOHD9+/H+tW7f+oGP5Ak2F86YHckSFnUNd5yZoWcMFa24/g7mNE+KiQssyqW/OWy6xfh0wgtyuXbtXR44cqVSSA+g3FkOGDNm8adOm/h9qCJTInCziuw6a6ugbcW9UobwMgcIaQzEQQ7Av6b60iT0xhPJMTrnCGgNLHqwxlBK2qTzLR3Hu3Lkvoq8k1hjKmYj4dM/ceXuLogdU+VxgjaGcsTXTC8ptEvXv5IYVck4ul/v8Q45jjaECaNKkyRfhV7DGUEJuPQpTO9sYFFqf61Dq6hmIBHyurMITVoawxlBCalV1YJ7u0QvPXPhtcA1G79VqSIyNjcp+wOxPBGsMpWTF5GaNPnUaygvWGFjyYI2BJQ/WGN6DusStWhW4dGw4JAmHyZwp3KsIwOVzAVUW7KsHo6St5jmfeBAKagxfRO3YR1KuN/ng3qnYuuk8Tl0VIPDOQQSd9MZW6Wr0446DyOQmLCvVKc/TlxmsMpQB95PrwqtxLWzf1xePL52AiggKR5iNn4/Wx2juAvQetrvczv3HH3/M/O23377PyMgoUfWmVCoVi0SiIgeIZY3hPQw9/ABy8sOefxYBoVAIEY+P3CErA4c3zdvvx6G98Nf8f7D2Rz04uQnAUXPQNH446nioIEt8/0e2H8t33333+8QWNhALeGqJXPleBXy6f5Ha1d0OK5buV41esbnQJ5t5xnB7z0+oGfBzgY2pEhUMxAUb//VuWQfbTt7IW67r7YHrD54g5cVxGFZujT96N0dQfBI2ny7Y3n/ZiI4Yt/IgMz/23wtYPqwkJbQ0Mukzc/5DfsfVtd+V4JiyQywAeCry46oUUKmIIeR83UK7CFSoVODnDCmkJH9VtCZAJeASP0OA16ktoKN1CVxFNuw8ZpZrGqkhUJzNxcXuSw2BIuALVEVtzzOGM/vPYMJ3mzFp01l0qW2PwJhsOBmK8OLKLlSu1x1ZchW0+Sqkp2bCxb0W+KokPHn6AhLdSvCu1xbnto7EiZ97MHGlZGbDuUoDVB/0A3Z9S4dc1rglrfqOx5PLT+Hf1BKXWtXFoo3HsbphKF6lamHfvsN4FBaLkPAkbL31Amu3HMOcdkaIiQqHi3935vigOAm6tWoGJ8+aeBwlw8uzf3/EbSweI5EQqQoV2lR3w6lHYcxXUrk+3o3jV1Gvbf2cm8
jD3c2ueBwWhpoTv0frDq1Qv1VTxIenIkatwsKUBEwyNC1Xx2XfiQvFOiY7t51Dj95NMGzpOkFR2/OMYcLGS5AvWYeWrjxsW7kYHTt3wLL52/D9rB8Q9/wGGgRMwd3bF9FsyQnUOb0JP/6g+fz//pUTeZG1+qlj3ijzBeEwqjDurbUNftA0+6ajZ1fvMLLAtiVj2jChnkNdJsxVhSf3rhR3zR9Ezu9cwJme09AZk8+/ICtVRPo1Pcjmdh46JYGL6zn7GazbANngMVBevgg7oQ6qPnsG7k9zIJgzB8hIhU62CpOXLgHKsLAwsZ9n3A7bJ2aoAVQ18HrqUrXmzeKOiY9PVStIVvIu3yBvvZgk9IeJQ5j5fiMmMiE1BIq5cx2EPLjMzE+qS2Spbqn7gfjsmbrhPiOdfw3yefOLcQXERyBZKx1PKqcP6dzR5tQ5P6xCKSOlBj42/PQbFB07IlxfG7zMTOhzFIgc/hrBTwWYPi8GS2b5YsKvhT+VK4op6++rC6TjLSa1tFUvPhnB2TGDyxjvo9RAVysyHz1XxcndvohsR/ButdKli4Bcg4J+BDyR34cz8V+6x1G1fHSWJjL3bnnn+c86kPSGv2v9vP5eQj6Py/z62SrqZ8lhxuch9y0UNQg6PX31Eon3pmPLq2oI6NgSD1q0RNzaFeANHwWJnhjBgUZISolGh9bNkfB4XYHzTN3woKjT56WJpkOhUGLJN77FykkDq/qPbI6EVqVG4Naw63rGECjkh06Jed3KJOXmMc04u/m+3qVGQIwlfzx5xkDXhmXIYKMlwLNsGaIiFagqeI60pBio0hNxPkIf+tZuyIiPh17GU/T5ZhDO3n8JN/0k3Dx3CeHJOjBy94Ze9AN0HjYMkaGBOH7yFtF5a6gMrTCsrRfik+KR9PQmshNTEZeUhlsZtvh+dEekvLqEV8/CoZQqkZouB19XG89S9OFcwxem+rrgPj+BB0GhSBI7Qc/WEbIX99Bu9BBYkDRHx0VDmhoDS10eYmIz4ODjX9y9K5bpmwJlL5+/fr7nV5JVNnbCmDOP4O1RGVdeRUEnRy0EHC6Wx6hQ/+5pOGjVhKubD5wlCgRa2UIrm0N8jVi03RCIgx3vQR75GFzJIxxe+R3aj/i9xOmgY1wUpRK6BiYFOmnYOeFiNUx4s5ycnOxsZGTEtGkwsax0ApaVckoBRws+AMQgJBKJtlgsZmQizxjo2Rx1Ne6yp64I967uQovB/WDl5M2su7N+C/o2cyFzdPLH3dNH0bR5W3oUujj55jtDbTy/cwPOvnUwdKQXsybx2W1yAi7MTCxwS+mMtu09mPUtc44wtG9ApoI3okH+BduucG+cu0Cvp3reJitzK6SQifrSDlYoMd4OhrvyLz8IS+meO6+5+T6aGkgyN963Mn65/gzqiECoK9dhjIFLfqiHYRkQRkuglXwdy+4+wITJ3yLKygyOVd3Q78Bd/GjoiMGHdeFraQMrUzVqu4ci8PpheNVtD5O3ewrTwLSDT0yXOdFQS5l9cM5Qv0Lf0/2664E5DUm2UFQcagsLi9wORIpVFXHY4czcrOKd2US/wQU7MOs3qG+B5RqMIZCbGJsCbwvDAtuoIeTHxKUmIoOuwcbTD23rexTYtuXQJfTtUOCnL5Lw26dgV5MO9VP4+gwLrSmeAY0deuRfpk+gJCX58v8mNimQGHo2dyMtbG3jBZ2gM/B15uKUnw9+n/YN0kgp4UWYHQyzE6DXswdm790Nvfp10MTaAA1adYLZlk3YOXkYanwzBN2c5TDy3wRDtcb3mNm14H1gzsXhVM5Ny/t8hrcPg+YJydufjslZBGr50raF1xblM2RGPcb6E2GQSSToNKg97j9JQi1zJbKEejBCJgwNdHD36F48TxBDl5+FWINqsMm4jWSZEYI5WTCt1wqVjYQwNzaCQJGMuIRMIvty2Dk6gNbU6Lr7Yd2/25Gm4KNG4wYIexkDHy8beHrXQGRkBHMtenpZyH4WgniSjVRtGYBnweFIjw5EjSYtkGbqjvDQB7CwMkXQhfNEaDjw8K6Kl6R4qzR0g2e1KiW8d0VTkpufOXVS3vx3f6zAmJG98CpBhj4D2+HQhClo3rod4oaNRejly6j3JAQqPQG85q6E0toDjc0FOUMXFFmqK3VaKDM7e6T9sf8JneXkOY1kXk9PTx0dHT3BxMREsyPxDVKtmpkZGBgl5Byal128ePHi9YRZf9kd3rLiTSlDx7oKxgx+c0OdaljmO62m4qdul0GoWyA5tYpOpcAMNnZmBVYZED9s8LBeecsNPCzybdV5cyZfF+RucXEnWRKdCJ4ONA2aSpPqrd+olLuFV9FpKGdEIl2sXneYmQ98HYtsJQfnT5/Aog2bsPjmBTTiC7B+2vcws7ZCk/0HUb9aVWzZuAl9B/QvqySoa7XMy9nw3Z7befMki8gzppCQELUbefrfaqPFOI/W6/UQNbcVZ/2SX5gfO88Y/t50Fj56L5EmMIc0MQlxMEC77q2QTTxac2RBnhyBQ5dewrluQwSHRKNjI0+Ykt/w2sWzcLPWwolbsdAj2UATiyjwdHUg1vFAJsnSti3+A99MnoG/t51FsxYNcO/GZbRo0xT7D1zHkC4FTetLxauSBR69jkT1Lh3AEQjBD3+OpKZNYd+xHURECVp16IzfD+zDGXsb9C0+uvfStvdI5aFNy8Hj8zldR/+St95Mz7LAfm16jVAv/nky3NzcilYZYiBROR3XmZqaMn0N5hnDqP5NizxGgy4pFZhjQN8azJJ/5TdPvV9DzZB9vfP6jrDO26ZDfNhvJmuqY0f1bsaEru005/laDIGB+JkbfhqCAFsF/liyD3OnNUU7y3CMvanC65DbaHPmBnb9PBOuf5S4s9Z3cnTbPyXqBpLsxx0y/vvHYwcF+Ba/N+Dr63uHP2PGjHmkaCExNDRMyT+9vY4uf9xllA5S5BHHxMRY0jAlJcWQTvnnc5dLEtfbfTeWNfQt5YGdN2BfzRmDOjXB4rnfo3qboXCtewALFixAY5USf1w9iuGkRDVr2gz8suDD0kPvR2RkpM07NtOuapkfXltbO8vDw+PJumW/F/ZS3wOf3KjPsjqRGp+Dg0PYp05HcYTHJyEqJgavf/kWKXxDLJIcxbrFu3Fx1lq49++HS1u3Mu80skQ82JpagKev/8HnsrS0jKFTGSa/AP/ZGsgPYfTMH2BqboOsrCwkpKVgY//J4K6fgcoHTqNv115Yb6zAlRpj8ShwPx5YibCwe384RcdCR8iFJCMdKpkUIh1tTJ70LVNXkaXMgoKjxsZth5EdXX6f2pcU1hhKQHp2BlKzlDBz8sbdu3fJDymAPCYMHF9r6KoUkDTww/4jh5H54B50li/EbzcuYNLB3YiIeIwpvTojKyEJOgmJkGdLwNcRofKs6eAI9SHkpkCpkMHQ2u1TXyIDawwlwLxmR/wxoQ/EiSmo7WyO4ODniMhIg4eHDynGZSDs8g3oiETQ1dFH3YnT8euOsxhOivjXyfrU5BSSifMh1NJi1CDkxn30rdMAFpbWcHByhFQhx6jhoz71JTKwxlACs
oPOIlOpxIpZP4Kj5sPZUh/2Bk7gVnEiTzZwnkj/snl/om6rpri/bB5mHH+EhYEhGPvyNvZs/xvyVDnUSTLwlHIY6xqBn0rK3KTcHfcwFFwoMUO/KhZmPfnUl8kaQ0nhcRRQKlUIf/ECr+PC4FTZDXyZEuFZKUhNTYGBmQlawxY1tBwhSpahslsNPOoyBS+8laToKYFUoiQqkQR9E2PI0qQQ8/hQyLMRnhALc06pxtEsN1hjeA+0IQsd/JSG3876lb5GBN/NES6eLkz7hgvHzzADoupa2aPlH+uh6NIV10Q6UHJ5xDcwgFzXAEoDMcarwsFXKKArV0IpzUTgq+vw8K3FvAa3cXHCGsVjrCHn4fHKaySJksEaw3tITk5m3v7RH3zW6FHgiUV0nB8oSD4vkSqQ1qs3eeoVyCSlC9qDrFwuIwogQVp6BjKkKiSmSxEdGQmBowvsLY1hoa8LfZEAEyeOJEojgEopJcbGwXeTaCukeFp0/KTXy/+Y3s9ZWIqA5nmFulAtR+h7/E8rqV8pbE7BwsJSJKw4sJQ5MrkCHPK3Ys99jOtRvfgDikGlUmPDkSAM7lC1wPrc9uos5QMrDixlDp/HRa1BmzCpd00mpNxa35+ZtzbTw4EFnZl5Z1sjbJvTnpmn2+sO2ULK6ipmvvagzTj1vx74+d+ruHQ/nIljSMeqaDluN44t6YZv5hzH+p+KaNzJUmaw4sBSLtAHnGJupA13B5O89VHx6ZDKlLShGBJTs/PW1xm8GTfW9cPZ26/x4Fk81OSv1bhdGN/TF39NbMw0A289YQ90tAREODbh9oYya1fK8g5Ycfj6KVDjrFRLwFEJoeCqIVRx6Qu5sj6fKr+737KuY978nY0D8+bvbhpY5PrmtR0KrcvlxNLuhdaxlB+sOPwHIHqApTOro6d7IvMllnHtKuDY1IOCbFBk7ce23ywx+N8znzqZLJ8ZrDh85VDHfXw1Y8i0rVH/oQ7iM+XgbgyDnjoC30ZnY9fuHugz5hqejOoN57+3wruKJ548eZI3wDdt4EHHhaYDhdPOrigKhYJpkEEbevD5/LwOLXIo7WvF+s4OjpeePHuKiIgItGii+djmedhLXDh/AUMHD8buvXtha2dLvwCCf526iIuNhYCk5cixozh86DD31atX6r8WLQSJhzmWLxAgmMQ3/JthtOk9jh4+jHkL5iOge3dm3arV/34Rvb1/alhx+MrhQI4xfQ1hclcXmZnpSFGpAGkmdkqBFSMEMDB8jjUBL5Ct/RwTVTJYWVkx4kAb+mRmZjJxLFu2jAlpIyAqBvSByxUEKiL5heRDOHPhPPOxcnxcPCMK5GFHLBEAB0cHnDl/jnkrkUXTQsRh284dsLe3Z/bR09PDwMGDkJKSggP79zPx0G007XR7fSJoAwYOwJRpU2Fja4sD+/ZTYSiDu/rfgBWHrxwxBBA1WgbLNr/g4sYq8I2LRkJyOLo350LKl+LQsKvoNMARlpMukyxfgTNnzjAP/vLly2FsbEz7F2U8hh07diAwMBCHSS5MWwlSkaBiQQUhNTWVfu2X51mUFvpAU2rXqV1gucjrCegD3L4Cl9u6kAUYM+t2bNuG4SPf9B1HvRoaBxWG/PF16tL5g9L3X4UVh/8AHk3akSy+DRr5pOPo7BaozBEiLSgd3oNWw3dy+3z9FWrMgebiEyZMKBSPl5cXM+ViZGRUFsmjnQyW2OWwuK3pGFIW8OYNSH5hYCk7WHH4+tE8eBwu+CIDdPxD07mo5/uOKEOIl9GHFE+2vG+fr2kk2a8JVhxYWFiKhBUHFhaWImHFgeWrZtyi0wm/DPJlKiju378Pb2/vD36r8rnyIDRR1djXucy/TGXFgeWrpe23e1RbZjbNUwIfHx8mVKrU4HG/HoHwdjLh0gHcy7ruhhUHljKnx0+Hg1ZOrFfinuCpYdOQy+U9XX382bEpPX0nlkU64lMkzMNCPYZcVpxLx6pJxY968KWRKuV5lMm7o3yw4sBS5uz8uX2hlyEtJu9O2fljswL94Y9cciVox+z2Bb7DJsJQ5unJ9Rgoq3zeudsXQ1EeQlkLA4UVB5YK4dTCboZvryPC8AlSUnpU4asBu2+Y7q3UCcfAMW2DkV0a4m6WI26e2AArY1OIhFxIZSomDIuJhJWZNa68joeTlhqVrCyw/FwoOrqXaFzrzwZWHFhYioEj1mXGvxTReW0TQHYe/+y7CCgfM33icTgqIggJsDYxZkJjYyskJcWju5UJdkUn4nV0HGqZG6NjXNInvpLSwYoDywezZcuWf8ePH/9NVFQURCJRuZyDdrRqZ2eHvXv3tm/btu2RcjlJMXDMeiHv6rRrv9nAq8J8ZRaVqHnoc8OkpGgmpMKQy60vTBgorDiwlJa8/iH69u3LTOWJmZkZHeGKzh5+a1O5v26YfVeB2TU+8BFRhQFch7JMToXDigNLqaEfYzk7OyM8PLxCz0vbJ3zK3tK71XPDhShjxL+8ilg1BxZEnoYclMLKnIvf6gqY4kT0hjYQdVgLxePt4FedwXy8xsA1Q1JCCKxMTNByxk5smNbsk11HSWHFgaXU0CJESYTh6NGj2LVrF06fPs301VAUbm5uaNGiBfr06QM/P7/3xlfRwvC217D7SkhuSvL63l/b8U1xKrc4QaHCoFlXsDgRnZiILwVWHFjKjbZt2zLT1wcHZl9PG6p3wooDy2eBmBQZJJ/pAEu5dQ8DXEyx8VkUEtVCmBBxOJkNDOy3AdG720PNMYGZ20QkhCwi2zmoN+4Egpe3wtpnSuiGPESP9j7osj4W+wZZfOrLKTGsOLB8FByuFtSqbHTS4eJApgqVBq7Cyw3DMb+WEDNvyaBO3gSOUX8IfH6G/P5P+PWREj9WlRDHXAcCp8lQhC4ER+DGvA34raYQ398mxyRtAMe4Cw4Eq9HJ3aDYNFQUG58lMKGruQMcdTJxOyy+wHZV/GYosYiZ5+/sg7Y3DHHk1guY+gdgOpeLdLEZMOhqhaf7Q2HFgeWjUKsyoSvi40KUAiY8LhKVKoyqLMDfL+TwsTdF1f4LsXkOwDXQtOEzMqD+uKbEznfxgFggglouZTwHKgwuFrroNvcM/hisjxnNjLCs6Wic3vTbJ7zCwnUPifFhefPRezS9ZNOrSkrSiAf1Kh7FJebb/0W5p7E8YMWB5SPhIkOqYOaoMFCoMFDuv0rI20t6YTwTjrWjwqDFPEySE8PI/2HM+twixbPYjLxjnkQml3PaS46xVRckRe8jc/Ra+bi//Te0mbEZ0WFPYGVsjOikpLywfW039P73Blp6GUL4ZB/c28xE9KvgT30JpYYVBxaWErDoYG5nVnzcJNq303oaEYbv0XzJ87x9GHlTBOLwTfpWQ4Z4smLYvQZfpDBQWHFgKS2c6tWr37906ZK3rm7FfisQFxcX3q1bt7CLFy82rNATEwbW0s6bry0gU0NNZ7qnJzgDEzSvK2MKvLYUMm80dvY1rchklimsOLCUmnv37vm8b/vp06ebz58//9tTp061sLa2TmrWrFlctWrVeA4ODvpGRkY6ZJeM5ORkaWhoaNqtW7do
j9cOKSkpej169Ng5bdq0BTVr1rxdVLzm5uYgwlAel8RSBKw4sJQ5zZs3P02nT50Olo+DFQeWckWtVhuoVCrn9+3D4/HuVFR6WEoOKw4s5YpCoaifkZHx9kdTBSir7s3+WHcmKSklw+j8k4zid2YpFlYcWL4Kzp07p67rQF+T6qO9rzG8qlVFy+lHP3WyvmhYcWD56lApFUy/kSfnfX3fdVx7Ev+irX95dApXGFYcWL4KmjRp8h/4FAqoKGGgsOLAwsJSJKw4sJQbg34/lrRoVN1iszraNX3Az6dTzi7uXnHZIkuxsOLAUm6s/66Ncf7ll9GpVaLjku4/CU87OKitVy8el6PI3UaEoeITyPJeWHFgqTAcrQwek0no7/2pU8JSElhxYGFhKRJWHFg+CHUFdeiYmXoPoaF7oVKHwcKoEnT1a0OobYtLiRZoamsMDrSLj+QD4Hxto+1+APy5c+fOIPfhj+nTp2PevHkYNGgIli9fhrFjxxUKV61aiYCAbjh58gQaN26MO3fuwNXVFVFR0dDT02MilMmkMDExQVhYGNnmhmvXrqJly1Y4dOgQ04kojaOouL+WkF5f7959cPjwIea6z58/h2rVqjH3w8rKGhkZ6cx90tXVQ3R0FHOPHj16CD8/f+YeBQQEYMOG9SU61/r1a5H7u5HwKzBmJQ6tNIZASMfN5EKuyMLTYDHcvHlQKwwQKYuBNtl25RYPzbuMxYGDy4lKKdFmcCRE2sbFR89SKhjPgRpY/vDXX38pMvzxxx+YsEoVdyb08Xl34bFBg/pMWKdOrQLHvCvuryXMvUcTJ04ocN2596MoPvQevf27fek8vL8bi/fWxehZK/HsxjHMmDwKUeu34k5iGLwNNkJLyEVqajPcqNoBB/7aA2OPsRjhsBj3r65Gnebffurkf3UUKlbcuv0c2uoMuFWxhSQrk/EEHkdoOvaq6WmCeIUIiVER8KrijJdRaRAkP4GeeSWo5TIYWlf+BJfA8ikI2HUHlSz5MJKr0aCaA+w4cqRFJaM6mVerlMRexMiQZ+FgdDqW33iOmMQM6IjEONvDB2b6hkXGuXbXExzedxCpKSQexWWEPGuGmOhQ2KsvwN2vFkIDX4GvcwHC7RGIixehk/0yKBTA/Rv/fLHiIBKJpDKZTJh/3axZs375+eeff/rQOG9dvdL85qYVp95eP+bvLaXyLguJQ62abz6g09XRdFRR3+bNdjprY6jJ4RytiStnXa8052P5SujoYYWrkfFIk3Px6sZrcAR8HH3wFOLzoUyRQMAhk0DTV6SamKQuLSuoVDgbFI6efoZFxvnXb7OxZJqA9hmHOjWFyHi8E94iFcwtdWFuNh4hCh44kKFf50donaCEQk6WyPl1RF/uh1YWuhxhgK8NsmUqxjOipF1bMwv4cHGo5V/vdC3j6ALrlEoV/jeqr7o0AlFIHOr3nI5hLXioVycAAZ17QKJvjb27tsDTWR9ytSH6Tf0NXH0XbJzVA9GPr2LXsbPgBZ2A/6hlqF3bB/2++x8Wjm0KM2sXPLq1G64ujRDQqgV+WrQEvUbMRs3es+CuH4vvRvaFiM8h22vg6bO7+KVnIwyZ0g/DJi3Blq2/439Tl8LbTR8X7sfDVE8ELzcj/H3iEWbOWYQhE1agz5xf8eR5AupYZGNc//b4vkdD/PrXTCzbEQThs4tYdfIh9PT4iFTVgTo9hlxZAlpNX4Qz+07jn2XfQvniGo5ECLF8RFMkXVwEY8fKiFa5Y/SALrgTJoU+FKgy9A9cP3IWo2aMwrPnERjWrw2S753CrXQTzOhZB2mJ0Vj23WjsP3kPPerb4YdNlzCRCPbN27fgrK+CoUst/FwvAyFyJ/Tt1gU2tJuT+iMR/vwVXp79+0N/+8+CgVWtcTc2jTz4CvK4qhkFEKhlJCfnQ8QjZsXn5u3LUWvqECi/hCSiin4Iqnm6FRmvq5U+9I14SJLq4WmsMTafCUdWajp8jZeCZq+7HqmIN6uESq0mngvJDbu3x6gfV1fEJZcL/dv5BWbFPPPKFYbygsfjfrzncHnHvLz5By8K9porIFHv+Ov7vOVKnv6YQibgh7x1m38fkzdfrVYvJjx84xETvnhYeAgwKgyUWTsuMOGxK5oOR3/c1ZEJO+bbt91sTRjyqHWheObt1PQQNGVqG/J/KsYV2iOH0U00oXtrtMlZZdxwEhNakWnfhaCC+8/qnTNTQxNYtUXu5zxmJlb4ZeU+/JJv98WjW5D/LQpEUZdML16FvStFZUJKSoohIaVcT/IWcxo4Y/aVF+AolZBDiXa+VbD92mMI9fSIk0A7m9UYfP6KfwEpBsx/IcVGDzU43IK2SvduNzkRKrJPgjILKy7chrxuCvhpGUh3qYwQtRIWUhnJBZVQZ8sgJWWK+WS/P44dZUbDkpIisAGXh9c9B4DHDHH7eTP1n5Gr5q07XePXgfWep8ZFONB1RiaWEbO23bb72LhjBXbTLmzcuCA+LoVZHrl8o6C0d6SQOHSbshGBNw7jzy2bMa1vbzToOQ1pMQlIJ2W/+vX00ci/FSrjOaxc3KDim+NecibOjmoI6w6TUcPeCGNWnEdqXAKy4oKhZ+GN/buWYkdwHNrqJcPNsyqi0lWw1itflfwvIpFIxGUV15T199U3bocoLy/v+d5X3XpCPjg8DjEiHpQ5LzZ71vXA0aCXEBBPgT7EAoGAEQoej8eEfD4fT1Mzcer6FbT0z6mkVcgwaMRYNG/QAM9FYmR6OqDOi3D4RafhTjN//HH3LhZHxiAmUYKfEiPxPUeJyn6NgSO7sK5tLCqZheHRXR0cOBEPryokPnl/okJldTfe3JPHj8OSj83v/MGvRaa1c5Io5DJm/LwjfunpmbIMvS3frxoGW7LCNnevKNtZJY0weI+CKKwcbl203t40/oKgzt7kdnn3YeKkE0yX4O09TQ/vHV6rIxHsYl9FF/rxd/81gPwfwMx3vLznHYdZMf/pI+5rpAPf7W868jlbv/BnspPqMjUVzDwrDOUD9RwsLS1jyiq+OjXdePSBoPNEKBREKIp83ObXd8LUS88hUvEgJTk7V8AHl6f5jakg5A9p0SKvePGSA/LcM/l7Vth0PHawxs70RMhvvIRLegqOm+jgefBdyCxEWLlpHTJ7DYCEL8YMC3OILS2QMGoYbmzvjPDHpwDD4ajX6UfExC/Dg6CHWDWnAUb9fK2sbkUeVao4GOXek0ePQlNO/Nm1xN+CEKHkUWFYdDKCcZfo0DdWM7iFHtCF/ounTmppq/79wDNtLS2t7C2LZi6/fWwT446712x6aMTvG9840+4BRGn3pyN4t1qpY3mcZ1efcYaFE46+sxXKvhG1O6SlppiK46+fEyoyqipduvDJ76Msat9C4iBXawYsPRGUChsjMeJeh0Ia8xK2eoC9pyMeX72E2l2GIfnZdeibWuO10gwPbtxEr1Y1kK3igy/iISKQbNMWIZ2nBx1OMrKNayEmWUYMQY4aDnrEzRRi2+4jMORmoEY1J5g7eTMVW+euBqJzEx8cP3QerdvVw72rZ2FpoIXVu86jupc72nfvhRV
r9sLYRB/NmtaGMDMS+lYeeH3/Euy96yEuUwG+IhtG+tq4f3AjqncaQn1abNq2B/aWhmjQsDZkHD2cvhGMdn7umL/mMKx1OejXsx3OXbiEJo3qIzJVRmQsHPtP3ISzjTVUtt4w0lLA0twMp64EIeFFIBp4V4IjOd/BXdvQsWtnSHlaeHX7LFyr+zHzEXcvorJzJTxKNUR8fDxkCa8hS42Df62q0NPTxqmrD5CUoUb/3gE4eP4u0sKf4GloBFyc7ODvaYVKjk64de02/NsGlNT23uk5UGP+a5APJ39Y4khzIELBzz3u5p2n8kvLeuTVrnO5fFiQ8nI0ufcq+kdEoqWLK84/K1gkzRUFWsSg82ryXBy5fg+NMybhj6NqGKY4wsDAHAYDemNS8GP8nJiMb4hArCVehalAjKiqThC8iEWWVIqMx7dh274ODi1disb+Kjw5swKyU7uRIRqHDIjw9Hnge69n6oYH+HOgd4GwCN57n6pWdTLMvSeBD1+knvorwPB9++c+gEsmdrw6YfFB/75r2hR6mxA6O1NbS6yV3aHp4FXfdXLJ8mrRd8XgafPG9J30x9h3RuzamWlgxMgvEQkayMYKUuDSyUgw/miR16BvYJgAg9bVwp7cGuLwbJ8C7t2KrIsoJA60XoE+UB2q5VyrLfXTquRtb9hNM2/l7s+EHnTqqCnH5/o2jj6NmdAkX7yVDAvabu9u7QosG5Cr69xUU65v3bEpE1ZvoKkV+NGrcd5+o4d2zXeV+kxg79OACc11qc1q7LZ656F5u9GHMBfq01FhoHw7tH3e+iaNNHHYGNA9nNG5R+FuD9vW8wTolEPH7r3z4nSt2TRvvnINTc/pVWny7AzJP5cC8bTr8OaVb8fG9JprFDqXf1v7QuveB/UcSnXAB7JvTodCCZvm54yxpx9DpaTFCzWRCDk8DUhx88BhWDTrCD1tHcZ7yH2LQeFy+PgzSoC2HU/B5pI9rtn1QNWURGQkpWLMt7NRy7USkmrVh+z4RcjNbVA9i4u7Il0IslIxzdwVvz8MxpSY+hAf4yHr+VOijlLMaLAX9trPSBp4WLekLpq22Ydtq6bj2/mrybmFha6lrFgwqmH74vcC+s3ZVX/zD90vR4aHOQb49m8+rcVspYOZK89QK88Byab/dHX10nM9DEpmZuaPYrH4l7Fjx8LKyoq+6qSrCz/Q73jI34WDR621CH61hhGVIo4tJA6PX4SiSmUnPAgMwu1IAep6CfHq1Ek079oOwa/TYcKNxMvIDFRp0A7RD6/CwtMPxzfvgVOLRtBNT4KXlydukLKiLScDYvkriK38kPgyiDzHauy8nIHeHeti/7pd6D+0HY4+SUcNOyO8zsxA4vm9CFc4YviAZth39hZaVbdDttAYZ64Fo0cTVwTHKfD65BZU8muCGJL7e6oe4v514tXo6qN9QB+EJsqguLMVVi0HYfO6I6jXpB5IlkzSUxWr991A7co60ObI4VytBgIDH5L11bBy3QGYurijPvchEkxr4O7zFDRylSDywWP4BXyDwAcPER8ZimZtOyHm1iGY1eqQV8118OYrNNJ9AXmVptizfA16fDMERmIZNqzaioZVjHExjIuqppmwa90L5uRxOXA9Ch09lfh37wOYIhW8Ki3hKIqDvjQcWSbeuHvpAloN7IOsuyeQaeiJsBvn4de7H0rqt75LHKi3UFT4Pt72Lsa2cvSkH029a39SWMCK5lULrNOesR3cncsR26Qn+D2HwSIrBTUNuEh+fAGN9W+Ap2uPuCwtfH8hG4Gh1lDXfo2UfadxX5YKLXM9eHhWwy0nU6huRiGlpieUm1aj7ty/8E+GDErJE7Qz7482h4/iZPu6SMzOQsstB7H39gsMHTUTdaKXQGodgLDjfaDIisCB9X+iy5Dv8tJGvYWiwvy83Xz67XvSytOod8ta9tuLu5f58a3td8U356HvaudA43u7jpCuK3DerVu3TjY1Nf2lZcuWWLt2LXbu3Ml49iKRSC2Tyd75WyYkJPjJl5agJywiCtHP70+xKmJTIXGwMNSYo0fVKvD20pzbc9BwJvRifn932OQ4Esa1NDlkv+G9kJ2RBi1HzSnqWNEsk07WzLKOiya3HZ6Tgfcfrvk8t21VTbt4SwNjKANIHGqmrgat6teAtpDHtJrv0dxLcw0karcB35AfjQdXUnTIlDVBC7eWeel2MiE5AxEGyqjBuV6JIfP/my51C1wjFQbKiMGdcta4gY597OmqWarkrPGKqnpXA9dbs68lEYZcVHIJOtamGagmE+3ZpzMMxfReCdCxV0+mWGPinQV9vdx2/1y09LUBh7hlwwfmz3jpWTXxV3Hqo1lVo1XOcj+UhrKskKT09bduXMPV/MKHHp8191fNzMWdb20JwPyhtVCjlhUpAkjR2V2GwMd8aJGi3jNdLfxh74rxD+dDt19/NAwOgnzjengTwWjw7SzcfXgT1s8jYJglRXZ8NOQyOepsPEzEQgJ5WiKeyBSYsWIlbMNeoWatg6TIagAL33FEGMZ/xJ14g5elcMbA1lXmFb9n8ZAHnKdUyFWH1/yOziNnMw/an2Pa34t8dl/dsPuY37sMm8m8FuzTp8/Ce/fu/UXnqedFvAimYjcqKmpGoUgjLl5CRlx9uHUFERS6RkqmXLtQ5wsLiIqVsw8T/4vQMI9KdjbPBQIBU3lZSBwMTYyQmJwGXQM9KKSZ4HB5SM9SwkBfB0mJCdAWC6AW6UObGHp8QiJEfCF09HUZNYuPTwSfx4FCpYaYbNfVNyQpIW6kUgouXwSJXAUxOWNaSgr0jYyRnpYGPX09yBRq8EiSdUncickk59DRZdKgIuVNsYALPUPNvrp6OkhNToJMxYFISwfyrHRyrmRo6VbCw+PrYenbgTyYYvBFOkiMT4CxsT44PCFSU9OZlp7aIh5JqwHSMuXQ16LlXh4yUjMg0NZFtpR4HlIJdIjGZEpVZDsfAh19UsZVkOP4kCnV4MgyIFEJoSVQMZWxmWlJ0NY3hppcr5SkRUTcZ0NtHpLJfeIKtJj7Y2BsAiFXCVlWJrkHHPIQSyEWiyDSIUVFlQxpJG16Bgbkl+cjI1sGCfGizExLXyFelsWKkngXH8O3a27h4vkzOPRtXwR5VkfbHuMxZkgA3HsNQCb5LTbs2Q+1lysuCTmoMWUqwrWFOEZ+N8WxE1AlvUZidArUwU/BTZUiIyacaUdBCi3ELpQYNqQ3qvfrC2VGNsZNmgyhqGw0s6zvCX1bcGrnv99e2Lt6PhEHZt3U/x2uHvriucuyUY2fdhg87Sc+n8/0d1G9enXm3NnZ2XnH5zz8DJVnnc7WE3LEl7rJb+m9u2jBIc8ohxOyh75jVnU8Zc6Z7GcCK703EjBp9iLmOT62fSUTR+G3FVs3IC5Rib4juuLIoavo0LE1Tly+Dn9HKbIyVXCp2Qyn926Db00PSLPJw0lK2WK9muS4Leg5ZCjuhCahtp0I8Uo93Du0B/4tmiNWLsaxvTtQr007XDtxmJQ7eZArFRgyqDd2HLqGjPhn0NURo2v7JlAJDXD84gO08TVFhrYVdm07gp
4d7KBn4YtVq9aict2WaFLFnDxLQlzbswN+Xfth1V/zITQ2wyAzY6z8Zz28mrXD44snwePT12lS4u5z0bXPQCSR20LbIe0+eQ9DiDdB74CegabeQiwQU3Vi5rX13twPKgwUIRE9aOkVeEOmo695iI3y/VAcIoJGJqLcPXJCHgxyzsOIQi6kHKxv9KZmRk9LSKYPe1NW1p5DedOwcTM0vFnw5crTXTuY8Obzp9Bf/S8ykxMRlCmBigh8up4WRFYmGPDXMmz9eQ64dS3BeR0D3Wq+UGSmoPKWbeA/D8XQhnXxNCgcPlVr4uXLCLi5v3fIjAqh29BJyU72tobzZ03JWzdqW1+Mr/ld60V9xxR4mAVCoXrRyQj6e/JzxSE/j4KCa1f1dL9J56VSqVZYWFjW0b52cHNzK1a8mNeXOeJx0L3w9qPb/sGazbvmBIeEeLm7uQUWEoeefQbmzffvpimz9GznX2CflgG98TYDRoxmQn83TdGC/rfqqCk+2GgR135QT2beY0Cvgufr4Ef+++Utm5Gpa3NNBR2t4BzyTee8bcOHDylwrF9Af836KW/a1Y8YOVizzaWwW26c8xZ1yFvFjK+BL00c3sX0bzqgln0cbv2zEzMX/Q/f1NgMmc1U4gXWQlpSMsbWrYz2Hcbil9FjEdCxLQZOGI+pkyfAeskK6BOvbfLoqYzbbbBkDuzPJ+KR+zubw1UY//45q3KfUTMSkc+d/7v3liL3NTe3iHxfXFQYqAfQtvdIlVDAJ89m4xFuLk53CL4fk0axWCxxdHR8ObRf97wWjWx/DiyfDfRdn712EOYeNEAf7V0I3LsF8bZqBJ7fjEqeERCSMqmOvi2ubFuNXeEvMH/6d0gzs8au9XuwvmVbCKNe4OKJo/g+7AUcZGpMcq0EmVwGJSn28bhqCAWfRj9Jjq0SCHjZJXmAt+w5MutJKKMP2TnHqtcv+cXSzMwsPrfhEg1zXX8K/XArNTXVgE7p6el6VDzo+tzwrbSo6UTrFfT19dMMDAxS6VRUWvgzZsyYS8K5H3TVLJ8N5Hf81En4YNRKNTo/C8OJNClUQ4/Cr2kwlrx+iZU/meHBwyjIEgNheqcylE+eYnKWAt3atYTKxA6vzWxwNCEUkiwJFAdDaZkOcrkcDdRymBgYIvVpKP739H/Mum+//XRfbdJm7Qc3rdCmD6tCoeBnZ2drUU+PFAtEdJl4OlwC/aeaPmHEcJFINFBbWzuLFivouuLiFwqFMioedCrLdLOeA0u5kZQtxfrjZ6EgPgHTZI/k4FyFiuTkKgR51MZu6ELN48Jgz1wMt62GsBULkKojgsvgUbgkz0K77K4wteXi+tiWOPP9KrwED5d0TZEpMUEnrxoQKSSwlopwLSGGlNU54HC5kEmUyJYrEBMTD5laBpmSC4WcB10Hb2jr65C0iCFVKSDgiaCrLYaRmITknNWqe8PaXBuzx727vdHHkptj5+ba5XaiMoIVB5ZyY/X6jXBy8WDEgNa0Z6qkkJEHMzroKTL27UQbrgIr/vc3ZmtZ4M9TO9HZzRkXnzzHma3r8WLzGtT9Zx0StPlwO3UBwmNHkCXJAMfAFNcyr2JMyBNYD5sKt4GjYTZ9MMRa2uARD4TkxJBLs8Gl78mIGImI+OgJhWhS3Ra1q2u+5ZBDxXz3kS3LJEIiZb730BFIsWDxFvw0eiQ4PPaxoLB3gaXM0HOvh7F9OzG1bvThI2VhxERE0wY7jOfw5NlTWsMOWVY2srKyGMFo2aITggJvMM2qj2qJYO/sgthHIfDy8YOKxLF3z2F0kadCnRgDF+Jhj9lwE/3a2+HgnkPgcuMg2/Y/qL19mQdczlGBS7/jYN7kK5jiBG2ZeeXmdZgYGiE88hmE2lrUDWf2p1+Vi8TUa5fhr42zkfVMUehL0f8yrDiwlBlcHV3suHybEQWFWgJkpCGVI4KWqTn8HCshOEECBSlqZMaGQ5EcC2VWMiMiVDwoWaQIEPaaFJtl2UQ40pgvOof36Ehy89HY+CgCtZbOR0ZEJB6eboqOCxZDxiXmq+ZBLc+ETCGHIiML/AziXaQSj11BijL07RQpWrRo3xoBu79HdnQ2RLo8CG1FMNURQC4UgydXE6EAXAK0waGfmXM//0+9KwpWHFjKjNQ7J5hQQXLsH8dPhLmTI/OVZkZaOnHh5ahu7ExycynJzd0YUaCvHKlHoVQAt6/dAF9HC7/+MgsvT13Fod//hbZKAK1MEXT+vo6hgpsQCtzBtfPA8xt8PG8wDS5mdgiy50Di58AUXSh84rfIJZkkEcSLoH08kMjFBrrQfmGHJvXrMd8NaQlFUEnJfumZoFUhfLKuGvEYRuu7Y3lGMISsQDCw4sBSpogb1UO7Ov4QW5oinOTizMOvlENJ3HxtpRAZShVURDykpEgRdP46ZKSYIZVnMa1a9bLFGDDlWyidakHVozskKiV42nqQEc9Cnp4FOY88+sQj4CjVJF4B8+6Tr1KA/1SIgB61EX/yNFM8oW2GedQJIPuRU0OVloYQBQcRly+jsksVCHV0oAXicZByhZqjhop6GHIljnnVwEr68aSaFQcKKw4sH0RSUhJTpqdQL4DpnYkIgZDkxFY6BkzdQmJSDFO2F4vFMNA3ZYoJxqR8T7J5SKRaeMlXQ4s8mTI+eVTF2hCKRcSr5yM0NAIKDvEqSJFELRKAT9x/JY8DDk8LKm1taBsaQ8+yEnT1NPUHtPn79cfRaK1Nu6lUkiIGmdQKZMmySKlCwhRzzIgImOsbw0ZPG5nSbKgE2ox4KKQySLLTaSMyqCUKRIaGQ0mKKw4ODp/2Bn8G8CtobBKW/w4VbVBsDWI5wHoNLCwshWCFgYWFpRCsMLCwsBSCFQYWFpZCsMLAUuZExqXD2kwHtQdvxo21peu9qigePk9ANWfTQutzO6hlKXtYYWApczpO3QsbMz38b1oLRMVnMiIxau5pOFjrM+8Qdp8JwcqZrTBy7klUstCHQqXC/vmdUWvQJtxa358J2/g74djVUGZ5yJxjTNh6wm78MMQfkxadYZZZyg9WGFjKnNsb6MO9GXPWXkdkfBqsTHQZobj6MAqt6zqSnJ6DEX+cwM11/fEoNAFJaRLMWH4Rc8c0Yoayo/wyvB4jDFkSBSMC8zfdRGJqNup7W3/iq/tvwAoDS5lDn+129Srjp2/8cOhSKHS1Bdh5KgQHFnRmtutqCdC/bRXGM1j7o2Z4gTO3XzGTSFjQJBuP2ga2qU3FwwoDS7mQlKbp6LRDAyckpEoYYaBCQPlhiB8zf31tPzx+mcis4xEvgi7n7kPDwwsD0H7ynrziRW64bGrzT3NR/yFYYWApc2ilYP6H19xIGyu/a1lgny6NNf37e7uYM+HN9ZqhFe9sHFhgv9zlt0OW8oUVhq8b2tViXtW9WqViejsSclVMJyZCtqMSlnfAWsZ/AKoOmUkRSD7YBvoCLiS6GRBqC3Exwgq1Bp+FVqFhS1j+67DC8NWjRtzmpeAK/4FSkQa1jhB6tbojiwhDnepxEIZ2htRuD0QC9pNjljewwvAfICx8NfTVafjmHzniM5Ww0lqPf
4b8NEA4PBeC4kJSXxxs3S3dFSQrZMAc0Vj5lgERed3bxwsk4t/J52F+JcDT98Mh9GXwQNKRXLiGGV0XEIJObqf34I5vw8SATm7onUAAp1xU5/tArexN8E6vRsJNOREAuKeIFhIOE5YmzpHBO04C8jxl0jFfNOjbTGwY4IBe2Zy9DRMQ1o70zaHVKggoGW6KX2RLAYwUlEfMmf+lnQgZKISoCQXGsi5yGljgcKyK08YG1FjL21nBhlWptBB6giQkRM7sNJICaiJV+gxJk72RBo9EQn6IkoyOfnzbDUBmhJWoVajjf2+dTXQl3AOzhQOWAi+ZGmo10qiXDSEEGTS/JHRwy/lYwXFhIiWPQSOk+FuSunUUBrMJT8OBD834Km26AhkSlg4y5HfVd7XLt+m3bG5MfLoDUWQUFBL/ANYlQEmGhgMBjPBXd39xKhQLFsW6rL7+3REBISgsktO/Lb9557GGWr4R9H2ZoPC/fuU8pO2GSJ/95aEcvaUnNQ9lpL2Hsnfrq3hsWybxham+8lYRmbgR/UqrinSNl78ceERr6mgDMJy8X/uBkqLfGVfaaylIwHISyNl816yaCIH/QjYTAYjArCq/6BYj0aGS8NrJaBwWAwGAzGY2GCgcFgMBgMxmNhgoHBYDAYDMZjYYKBwWAwGAzGY2GCgcFgMBgMxmNhgoHBYDAYDMZjYYKBwWAwGAzGY2GCgcFgMBgMxmNhgoHBYFRo7h3pcevR22he2xNFGgM8nFUvKFWl7D97B7N+P4Gpg+ti3+k4/PZeu0eG/ycjUjIYFQkmGBgMRoUnr1CHQrUe7k4q7D4di81HbqFr00D0blXpRScNrg5KHFs0EL+uv4zPJjR90clhMJ4ZTDAwGIwKT6cp66EzGOFko8CaL7vjyq10LNgY/kDBUHfE8nL7NkopDi4YgLaT1iK3UMsfa1PPF41remL2HyeLQ9ERqAUY1D4E0wbXRb2RyyESCnBmyVD+7Oq9kfh2xTl+Wy4T450h9dCzhfneNSo5of7Iv2HiOLzeN7Tkvit238APq8/z21KxCMd/H3TfnBIMxssEEwwMBqPCM2NYAyzdfh0yqRDLdkbg6KW7xLhX5WsexCIhlPLSTxm1yQ5EWLSu58PvK6RivL/gOC8W5rzegq+hOHAuHtUDnRHi54jI+Ex6FYJ8HeHioMLa/VF8HKYy8+yYJ2wCujevjGPhd/HFklO4dScb04fV489z4PjzGw/dQp/WlaHVGXmx4OlsjXeG1sfUeQeI6NmA3fP7Ptd8YzCeJkwwMBiMCk+P5oE4eSURB8/HQ6c3wlopxZdLT5LFfN7OWo5t3/WGXGqeVdHdyQozh9Uvub4zMdaU6oGOaFvfp+T40I4hqDtyOV/B8PennfljjceuwIB2IViz7wYRB4loFupZMgPWmwPC8NHohnwNxNoDkbxg2H0qriS+n9Ze4AXDgg3h/P4fH3SAk50CIqEQmbnqZ5I3DMbzggkGBoPxUpCWXcSX9Md0r4EfVl2ARCyC3mDkz8kkYpSt7b95Jws9p2/mtzd/0xMfj22Myd/uR7e3N/JxfPNmS7So7fXA++gNJowl96CC4ZvlZ4lg6FVyznILiVjIh6PQpoqwIDe4OCiJeLjNH7sSk86v7YmQodhYyZCdpwZrkWC8zDDBwGAwKjzjv9yLcT1q4oslp2GtlCFfrUOXxgHYejSat+KpWQVENIhKwlP/A4Ws9PPWsLo7ds/vhzFf7EZiWj7emX8Iiz/sxPsfFLsv8Jy8ksSv7axlCPSyR0xCNr9/v50vPZJboMHYnjXhViwYohNyICxWBpwl8ld9km7GfwImGBgMRoUnwNMOjWt6oEWYN9+l8vSfQ/jj+87GYce8Pmj7xtpy4QO97LD0404l+7fu5iA8Kg2bv+5JSvpatH9zLdYdiDILBmrbiw36nL/O8Gva5GAhI+f+pgRLzUZEbBa/fn3u3pJzX5E46oa44fKtNCJkiuDpbMWLCgp1i2C1DIyXFSYYGAxGhaeKjz2/HtGlWsmxg+fv4OjCQebjnauXC5+dr8XmI9H8Nm26+Hr5GRRp9HzNxN7Tcfzxge2D77tPSmYBOjYKwIB2wcS4cxj92S78sPoCagQ68+epTwNfq0Ho3DiQd36kLPnY7P8whoSnQmHhe+3x59YrGPrJDr7nBXWg9Pewe0q5wWC8GJhgYDAYFZ7erSrzayoSqKGnhfS9Z+LRuq7ZgXHfmbhyXRqT0vNLjLmNSobDCwbyjo8LNlziS/lTB9VFVX9Hc+Di2oW45Dx+PY2cs7eRmU+Rc7tPxZYIhlm/H+ePtajtjU/HN+YdJmtXdkH1AHNcHRv5Y+fJ2zAYTPjpnbZ445v9+H3zZXi72mDtl92eZRYxGM8cJhgYDMZLw6AytQJfTWpWsr3pm54l2+eWDnvgtbvm93ng8fN/lYa/99qy5wa0C7r/2nvCfzq+Cb9QqN9E2esZjJcdJhgYDEaFhg2lzGBUDJhgYDBeXh7he28kJ0UQmDhYfPVpPb6J/COiGy+H453p8UEYDMbzggkGBuMVhONoqVwPg0AEIdkWEAEBo4j8b4SQSoYyXQkZDAbjSWCCgcF4xaD1CSaTEbs2b8faZZ8i1C0NQf5y6HNFuKu2gbJybwwbPxNSiRjlBiFgMBiMR8AEA4PxCmAqXopMOpz57lukpCxBmzA9GvdTQi9RQS4mIkLBQSzSQixdBvW1bfh5kStem78eSqnEXONAYdqBwWA8BCYYGIyXHRMHnUAAWVEmvujSBR/MDQIXJ0SRQQSNiIOVnRgKnwCIbVxgFBpg0qXCqjAer8/0R+KJD+EUMAw2PtWKBxRiioHBYDwYJhgYjJccg6AAnJbDnM5t8e4sH+RHHIJJKyS2XweF3ASDyApihRtMYh/oJRxMMi8IretCqDsNL88buBtTACnehNTLDyKR7EU/DoPBqKAwwcBgvORwOiWuvtEFHXqboLkWDm2BhAgDDUx6CYw2Jtik56FIEg65TyoUsirQSexgFOVBLG0CofIy3DxO4u/3z2LEn0cgFHL85Ex8vNyDty37ZbGcM5nMHRtoV8iCggJYWVk9NB5LHIJHj5X8rKs8vnrG8VeE+7/7HO7B+A/ABAOD8ZJTdGQF5h29hm+riFGo45CgFeDPnXqcjtdAx0lhR0SAVBAPF/s0jBkRh+5Dq4FT+UEHE+SyylBmJWPYB/ZIWjgPPpPfKzHgRqMRMpkMIpEI9vbkfFISv00ZPnw4Vq9ezRv9hQsXYsyYMdBqtZg3bx4++ugjXjBQ8UAXsViMr7/+Gm+99RYfXiKR8GsqJnJycl5gzgGXLl6cOaj/AARWqoTc3Fxs2rIZzi4uz+3+mzdtxu8k/7KzsnDy7BkYDAY0rFuPzzeanz/89CPatW//b2/DBAPjqcAEA4PxkmKkExnpdMj9eyeqhCpQWKDGrweMuMPZo75Wh25+gI1CAmuBG24YNdhzNx6fL7LFd0tP4tM3M9B6QB3oTQ4QKm1hSM/Ajk2LMPH1GTAKhBAJSkv+er0eWcSgqdVq3shTqGGjC+XNN9/kBUSVKlWQmJ
jIC42y6Egap02bhhUrVuDkyZO8MaSCgdZAXLhwAXXr1n2u+VaW2mFh/HNMeP11NGnahN8OqVwFl65eQWj1Gjhx+jTGjh4Nb29v/PjLz4iMuIGunTsjKiYGQYGBuHA5HLa2tpg0cSIaNmqEYSQfKNTYVwsKxvZdO9GtcxdE3Izi411BRNbwIUPI9eb5KN6ZOhXRcbG4dfMWv0/FlYurKy/MOnTsiInjX+PPV/Lz59cajQbVg0Nw9UYEIq5HoFr1aqhB9jdt24rqNWpg2dK/cOXyZbRu0wYLfv0V23bueGF5y3j1YIKBwXhJERHFoNfloUCbiLYuhXh7kwhDPD0wiYiDJKkKOrUOeqMIBYIM+ApFeM3HFTZCIzamK/DDb8nYfWAPvv6lL0wSF3DqFDj6CKC9egTisLa4d2xFakip4X8Q1Ii98cYbuHPnDi8ypFIpLxocHBz4Ujs9Rg0oFQcbN27kax/oeSocCgsLn31GPSHvzpiJwUOH8OmSy+Vo274dxo0dA6lMCoVSwYdxcHTg1yKROYeoWKDs27MXR48cLREMtGaGEhwSgnoN6uPrr+by+/4B/vj9zz/L3beyfwDOXbxYPjEkz2jXWEpcbBy/vhMfDx9fX35boVDA3sGeTydt2LG3N6dr9qxZOHfpIl8j9Cb5mzAYTxMmGBiMlxSj2ATdpXNI0mRi6gY1fq0VAE+JEPkiKcSGPGhEet4BQM8Z+eoIzmSAQGiFPh5CtIARK3OcMWPcVhhNeehcSQsrDw66E0cgCW1Dwj256wAtDf/+++8lNRLVqlXjxYHFX4EaLyoYKMOGVcy5FY4dPYIbERG4fCmcCB4Jf8zb2wdnTp1GQKVA3Duo5g/ff19uf9PWrejRtesD43ZxccGF8+f57c9mfYoBgwaWnDt07ChaNWuOurVr4+btmJJhsBOI+Pr5x5/w96qVfE1BlaAgzJw+A6vWrnnss1ipVE/83AzGP4EJBgbjJUVAR15IT4RK4ogBVRRwFtABoQEJEQdSgwhiIhT0xNCJBWKiFzgQuYAckQaFAjk+S9CiiqMRGSYFQgr1+PJMOtQyI/aOvMuLhUeMOX0fVBiUdYI8fvw4LxQEfCnZhPDwcPj7+/Pb9zZXVBSaNW+Bj0npfMumzTAYzGnMysokpXiHkjBln3HKtGm8QaecJqLCIiBOnjiJxk0al4tbp9PDy9uLbyr4aNYnyC3jt2FnZ8cLhSoBgbxgqVbdPE23l483tu0wNye8MfF1uHt44NzZs498huzsbH5NxZlEKv1/soHBeCRMMDAYLzG0mSAmMR6GAiWENgrygxYRY0/nizAj4OeNMFdvU2NtMCrxRoQaE+sVYcq7KfhhiRyRJzLwU1N7vHamCAWQw8ZEBIDwyXsyWHwZKLSEbKmOt1xn8XugVFTBQKHpHDRkMD756CN+f/++/Vi+YgU+JUae5uSi335Dr96977vuu2++wbqNGzB50iR8+N57OHj0SLnzx44c4R0ad2w3C4DfFy3C199+y2//8tPPePf99/jtVStXYcbMmfy2oEznkJ9+/QWNGjfm/RgsNTWUGxE3EBAQULLfpmUrvsni8OHDvG+GlbX1U8gVBqMUJhgYjJcUAYQQOPsjRsQhXJuH4bQpwiiGia9VEEDOiVBIax2MBmgEdMBoAz6OTMHbvazRv4UIEqUj3njbBcNPAaY8DtN72iDJXgInoQFSOt8EicPib0CbHa5du4aGDRvyIuLUqVP8ebptTQxTUVFRiV/Cxx9/zC/UgY+e79OndFppGh9dqMiw1EDc20WTHrP0xnjW5Ofn48CRwyXmmabtSsR1bNm8BQcOHYKdvV2JcadhqMGm4anfBl3fvXsX3877nk/zOzNm8OHo8+Tl5fHn4+PjEX7tKn/dQbJPHUcnTZ5ccv82bdtg186duBV7G6dJnqpUKvy2aCF/jorBzMxMeHh6ltw3JSUFe/bvw68//4y+/frzYehx+jfevHUrEQw+OHXyFO7EkftevfJc8pDx34EJBgbjJYWaWat6YXCxEmJqexukXZLBTyTmh3nWCbVENJggMJqIWBBAQgzKvjw5QquI0dKbGHYZB6NYAjupO/SSSzilV6CdowGiug2LHR7pv2bDzddMEAPfunVrDB06FFFRUbwhtNCYlH5pT4qDBw/y+3PmzMGqVavQoUMHrFy5kjfKFCoQaNfKn34yV+VTw9q/f3/eSdICDXP9+vUSZ8JnDRU71veUxJVKJXr07FGy71vsaHgv9x4vu0/TX/YZaJwPiqde/fol27QWgeJTJpy7u/sD43+9jEPjvfE2atzogellMP4tTDAwGC8pQhMx6VYOsBK5oXrNQny3OwFTfKtCoSeCQKeG1iiElDNRywyOlIA3puVicVcHqJVpkBqtYTQRsVCUAVNBIVzqqWDvbA/bpt34KbAp1HjT3g/z58/nRQM18MuXLy83OBPl77//5o0uLR1bwsXFxWHRokUl/g00LmpA586dy8dnISMjo/wzCe/tn/HykrF7H5w6tit3bEdiIbp4lndKvH37drmmBQajosIEA4PxsiIUEdEgQPMpb0Jutw6zPrXH7/OyMLCyJxyMAohMyZAKBRDrjMiUSGElVMPEZUNslEFjJGIj9wKK8gQoMqpR04eDsVpHohIURC5YZrEEb+AvX76MAwcOlAgFC1QELFiwgG/7p4aeVtMHBQXxBvBeR0haC7F//34+3L1NEPdS1ufhpYQ8X9bHn8G07G9kjhgGx9kfQWPiMP1yHpbE6TDBT49va9shLTUNC375Bfv37UP3Xj0xvbhJg8GoqDDBwGC8xFAD7NWqD1Z/vhqdWudjwiJvrJmVjaZ1fGAVLYUkOxcKYshVOj3cvPVwtNWDE0ugy1ci7WIuEg77oWsYB2e/INg0nwaTRAIh39dCyAsC6oewZ88evuaArs+fP8+P1EibJxo0aMCLCLpvITo6mhcOO3fu5H0eatSogS5dupSMz0DD03b5hzlRvvRigUK7kk4Yi/S1G+AwfSp/SE6EW1t3OZbEGzGXiAWKi6sLrG1skJ6ezsQC46WACQYG4yWHEwjR861lyL82FvkxF9DlDWcIs/Ow5mYq6rh6oJJQCjtNHiTXUyGR2uH6lRwYYm0RoLRGmo6IjDG1IO/4Hji5qth/obzDIfVjoEu3bt345d5zZaFCgA4qRB0dyzo7WqACh47L8KqjJ38Tl6jL0CYlQ2ZtFkG1rAUo6OmAbK0B9jLzp3fYiOGY9s7bvI8IFWcMRkWGvaEMxkuOUMBBbmMHXfAXSD3UAV60P75zHoZNUsEoSYdI5wZxfB6+quSIi/tF8DWSkq1EDWsXHzTyE0HacAqE7iGk9C+E6NVxIXhSnsnkVlIPN34t8yh1WvRTmZ07LWKB4uzszK+ZWGC8DLC3lMF42RHKIOQKYGVbGYoet7H9i9ao5xkNW6MOWhstlFwyCo1K+DgY4d5Wh4MXBMhItoGfzIgG724DZ62EgBNB9E9Ga2IwGP85mGBgMF5y+EoBgRUkpAAr4Th0//gghNp07P/zG0gvboGbUoT/sXcW8FEcbRh/z
iV3ycXdDYgQPLi7u3uhtIUKFVrqQksF6gZFi7u7u5MgwUIS4q6Xc/lm96IQSNBAv/nnt7mb2d3Z2b3dnWdm3nnHoDdCrdLj6k0DvMK7oeUfn5JdnMGViFFmTvBsXB9QKJQXFCoYKJT/EhwjBEzzNs8FXV//EjB/BT1HCLOZwy5NmBZ4swkcrpl1LW3kaIhOENd2rikUygsAFQwUyotLFf3vfEsk+0/KfgpLw2Vblxoq8Mif4O4EXiiaTF5u3vNtj8dKo8uMHTg7b+RTsWWgUP5LUMFAoVAoFAqlWqhgoFAoFAqFUi1UMFAoFAqFQqkWKhgoFAqFQqFUCxUMFAqFQqFQqoUKBgqFQqFQKNVCBQOFQqFQKJRqoYKBQqFQKBRKtVDBQKFQKBQKpVqoYKBQKBQKhVItVDBQKBQKhUKpFioYKBQKhUKhVAsVDBQKhfIC0//DzUXdmnjIujb2wNyVJ3E9VYdF77WHVExf788zmflqrDx4WykQCNd+OLrZhNrOT02gdxSFQqG8gKg0eo93/ziYtPCdNmVxn7/UFlFRUThw4iJSVVaY3KteLeaQ8iCcFBK80T9URr6O7/rO+oG7fxhoU9t5qg4qGCgUCuXFgzPtp31JP7wcec+KiIgIcKKj4WyjxapDtzGsnX8tZI/yMKz6sIN15+nrCvbOHfRciwYqGCgUCuUF49U5e1RViYVS6tevj2giGpT5WTCb/cHhPMPMUR6J1R91tN55Kn5U90jfZbWdl/tBBQOFQqG8YHg5ycTVbcOIBiAaP2+8gjcHhD6DXFEeB0bU/bL+4s9UMFAoFArlScEb0Mq3RhsyouHDNbuoYHhBGNzW/1Zt5+FBUMFAoVAoLxBms1nmqChvYMjIKcB7885gSkd7SITcWswZ5fHh5Nd2Dh4EFQwUCoXyAjNr2Xm81d3xvuuZ4ZaUF4O1h2PDJ/QKr+1s3BcqGCgUynNLk8nLzQ9aX9/P7rGP0TjYsdrjnJ038rk0G4yJz8SEtve/BjdyxZjWv84zzBHlUeFwOPHvDGv8Xm3n40FQwUChUJ5bmIJ6zcEbU9cfvvXOj1OaeUpE/Cfe5v71hCb3xF2+U/TXF0vODF4wo0tLbxfrG0/6mE8KV0cbxOWlgMe9V89w+GKIRNXaRlKeE4Z+ud9q95yB62o7Hw+CCgYKhfJcM6R98G/Mkl2gdh05a9e5l3vV0bYJc6mZ1d9DQGp4WT9vvHbIDKg/G9/8lX0RXq886WM8aWxlIszanIlP+jtXis/VWSE1GxjVyad2MkapMTweL2bIl/vsiFhwre28VAcVDBQK5YXAwUaStuO7/u7M9zmrz/+UnFnY5eOR9etyHtPJQFahfu1rPx9p/8XEFmM+Hd981xPJ7DNkw+ddMfbbg2joZw2l1oz4DBV+fz0MIgGvtrNGIYjF4m+ris8q0GYs23ejDp/H1e/6fsDUZ52vR4EKBgqF8sLx9tBGbzKfV+Kzm73+88E9X09swg32sJE9RBKmdccSl525luH3+/SOo/fOHaR9Ojl9NiyZ0b62s0C5DxKJ5P2q4r0kEswc1exZZ+exoIKBQqG8sIT6Opw+8NNgG5PZzH3n98ObXGzFQZN6BAffb3udARem/XZcMqR90G+Tetf/Y1LvZ5lbCuXFhgoGCoXywsPlcExzp7brw3zffSZh+C/rLvz6y9QWdnZyEdtfcTwme+WfW660Xvpht55rPu+VXru5pVBeTKhgoFAo/ym6NvVZySzFGr31hNm7TzYPcdv15uCGb/dqGVjbWfuPo8asqWPxz4bDMFq5YNjUz/HdG/1qO1OUJwgVDBQK5T+JlVhQuPqzXiG1nY//B8yFJ+Dg0wu9Zy5GXOoacExKfNivKey//gxZGVGg/if/G1DBQKFQKJTHon/d3mjzw0UsnuBtieDKMGtLDIST3kWymjHwA6JWzcLYj/9Cpl6GCZ8vxayxFv8XOcuHIPL0yxhw+x1sqfMTjjX8vVL42py2SD+5CL3Hf4qkYgmGv/sLfny9K7uv8sZW9Br2Nq6mqdBm8BtY9+u7eC49bP1HoIKBQqFQKI+BFkeJKIgvFQsV+HT+9+yn/sLn6Dj9GHJSk9hwL097zG6YgffD+JDJrJC//22MPhuFb6Uktc2LK4UNMXMQ0m8F0jISIST7ftTGDY0z1uPcrHrwbjEZKTlpYNxTXf1zILyGLkfS6pHP8Nz/v6CCgUKhUCiPjikDZvAgfcAmgoafErEAqHIzkaPUYGArEZYfvEMEgz9jsQqzuSNCSxO4K/zz1O/Q7vsrrFhg+HLzHPwRPB2YtQccsx4HbhWiR6A1Ql5Zj6Tn3tXWiw0VDBQKhUJ5dLgu4MKIfDPgcJ/+AM3FH+DWaQ7enfUDwj0VSCwwM9Nulq3ne/tX2r5i+GasAXWCFGVhjoysMySQb3JkxR/B4B5NMOpaFng2wbh48yQ8BE/w3CiVoIKBQqFQKI+BEF2tgYGzo3D4g4hKaz5r6YbARXdwbvhsTNiYig/aWNoJrn5nqnHqdQP5OHw9F4i0uL82F9wkJVcA+51jUw/rjlum+sg5+A6CfEchJ3nZEzgnSlVQwUChUF5ICgoKbOLj48PI0jQhIaFeRkaGW15enj1ZrMnCdGsLzKQWW+I6Wi8SiYxWVlbFTk5OOY6OjpnOzs7xvr6+F3x8fKL9/Pzi+Hy+oZZP6YVlecw+OHh0QB/lz9g4azR4pmK836sh5ifXRXaQAPk+fKw+ehVo0wD5F//EEZECGSlpZM+AatOe9tfH+LJld+jHXQDTePBWj3cQ8d5+mJIWwKnNJmTGb2VHYRjUWnCs7z/NN+XxoYKBQqE8d+j1esHRo0fHHDhwYMihQ4caHT9+3J6Jj4iIYJf69eujbt26CAgIQN++fcHlPtrAvaKiIhDBge3btyM6OhqXLl3C+fPnQQQIiLgwtm/f/ma7du32kmVpo0aNzj/Rk/wvIW2A7Nw0fPfmeAS6vwODzAMvf7QAOaNbsatf23Ubx9o2g9MfWrz51y5s29Qc/v7d8HnvO6jSb3IFeIFTEbPFCs2DPZCms8HLX23HgZHMaNkQnJ2nRvN6vojL1aNVvylIj/noqZ/q/zNUMFAolFqFFNiRf/7555x58+Y1VyqVHEYADBo0CN26dUOHDh2e6rHlcjnCw8PZhTnuXTCzN9Vllri4uNd/++03rFmzBkTIwM/Pr2Dy5MkLpkyZ8oWNjU3BU83kC4MI7/20gixVrbPCysNXKsXcTit1uLkQ6RVcdIt6Vw4zOEeOx7kb4+9J1bfzVJyOeSHmbfpPQAUDhUJ5pmRlZdV/6623tixfvtyrT58+eP/99/Hdd9+xy/MKEQiYOnUqu5RgQ5bpW7dunT579mycOHGCWbf622+/nSCVSlW1mFUK5alBBQOFQnnqaLVazy5duly7fPmy1ZIlS7Bs2TJ2edHp3bs3u5QwlJzb0ClTppg7d+58dOPGjR14PJ6xNvNHoTxJqGCgUChPjaKiokg3N7eTM2bMwOHDh2s7O0+dsWPHMgtjZdnmvffeMyxYsECZnJzsJJFI1LWdNwrlcaGCgUKh
PBX++uuv2wcOHPBjDAv/HynpZpH17Nkzf+DAga9MmDBhYW3n6bEpXAM7nylVrOAjMzeTFij/cejvS6FQnjhGo7HVrl27/DZt2lTbWal1tm/fLiSC4f1x48Yt5nK5NXdA8LzC90du5tlaOfQf3VxQd0s62gur35by5KGCgUKhPHF++umnt+bMmVPb2XhuOHXqlM/Vq1dDwsLCLtd2Xp4uZvwwbRB+WXcUIud6mLNqG/rUkSFtQT+0uvgqZgj/xqfLjqN+79exa8HMkn30+HBsbyzccR5u4R2wdttq+EksabnaOeL2hX9QL3Iyfu0nwUdndICLHUauTcevHfn49tWB+G3Dcbg26Ikd2xbBgWdJkU5K9XSggoFCoTxxunTpcqZFixYDdu/ezfpN+H9myJAheOutt2JrOx/Pgo3jfLEndBsS00JJqBgBDl6IzMqBRCpB/sqR6J2ZhclzgQ0T6sJ/bDBuLxmIjq4uqPNXFNKWeMKUcxiOHk5IycmEmC3iTej8dgoS0jPZ9OttsMOXyblsC0MfL3s4/XAWSX/4wZC6Hc5O3sjKuQMuCuikVE8JKhgoFMpTYd++fazPgvbt2+P69etwdnau7Sw9U1asWMEaQaalpTFLVm3n54lhuA07O7tKUYJGnyFj7+vgC/mIO7gBxndCwYMVYrNz2PX5ZOF6TYZrSQvAgN9m4yWfrwBjKC5qhdjX15ON59q3RRehET9EGfBRBB+Mk86vV752bx6MN3GsWIicIX5skO/WE93ESixKM2GiK5dOSvWUoIKBQqE8NV599VV2+ffffzFmzBhMnz4dP/zwQ6m75v8cycnJrNOpO3fu4MKFC4zHSjaeEQ3/GR5gw9B7XixMc9+Ah709tGYS/mANlrzb0bKbT2D5hgJXUuinsOIDPOdK3QX+blzcvmMEIizFk3dVpZQhFjBrYX+XcHG4ZgBc6aRUTwsqGCgUylNn9OjR7MLwzTff4OOPP0ZkZCT72bVr11rO3aNjNBrx888/44svvoCjoyN++eUXxl6htrNVq/Sd/jO7MN0JTZwcsOalHHQBow2ul2+kiidCwYeUQEREGJnpsVEmGm6kmtA4gPfgg/CDyQ4i5OSkVWmbQCelejpQwUChUJ4pH3zwAbswZGVlsa0O8+bNY8OMqBg+fDjatGlTm1msEp1Oh9WrV7NdDbt27UJISAimTZvG5p9ZKEB/D3u03HQb7zRWgDFaNBIl4CixFOmmjCW4UjwboVbAz6NmwHnIX8xEEWgq0WHqlmT83scD+tQdOKATYkVI1UWTiCR1J48k6uyP1lY6jF4Yg2UT6pEfJxlurhG4kJUNpxQ6KdXTggoGCoVSazC18rlz57JLKUzzPRPesYMUHgcOgJlxkmmNaNmyJVq1aoWwsDD4+/s/8bwYDAZcvnwZ586dw7Fjx9glLi6O7a/v0aMHO9fEiBEjylpK/m+pwoaBYWlaLjYmp+CdkX3htu8CpK518eXum2gvttgwCFt9i+ivR6Dz/ANo0P8dXPutG7vfrpQMfDimB5wnRsMvsjeuZ6Thfr0Hy5ZMRYNwNxydexKbE7Pw7WuD4DHzKGwDWmDHrUy4MArBcyKdlOopQQUDhUJ5rnB1db1vrb20UF+8eDH7efv2bbZQZ2aXfFgHUU5OTqzwYOaJCAwMZEdzMJNQTZo0iV0oVWA9BLm5Qx6wgRg/LN+NH6pcZ8bIWSvIcnc8H7OW7sE90YTUnNxKYdduXyA944uy8IzfN5Dl3v3opFRPByoYKBTKCwOfz0eDBg3YhUKhPFuoYKBQKBTKU0UxfCXShtd2LiiPCxUMFAqFQqFQqoUKBgqFQqFQKNVCBQOFQqFQHg1m9krfNxBRvy4bVOcn40ZCNkYtisYvJd4bq4aZJ8Iel3Jy4cgBUrdOw8CtYTg5bzKOvx8O3ReXWPfP2csGI2xLP6StoW6dnweoYKBQKBTKo8PzYIe/lqE+BTv3RvgxNxPVuF8qw633rzjZ2/J90aoUjCwZCOEwai3SRj3R3FIeAyoYKBQKhfLkkDQmBYsBOWbAiaNFHSc3SJv0Qhs3Jf5dfxhb4rPQ0qbyLsuHuGJTn8uYZbUAR5VmmL/6HMqxH6H5yWFlLQw3loxEi3eOoUePltizbRde35aAD5tbY8frTTB6nQpDBrTDsY2rIBq0HOd+7lY75/4fhwoGCoVCIeQt6Q3XFb2h2T25trPyQmNK3wADR8Z2NZz4oCm03f7C9aWD2XVf9p8C36bTkH3j1yr3Deo/A44vf4tRH31q6ZI4WbpGh1Zv7UZibjasmKByF5r1fh0fHlwMcdeZiP25P2wZh5K/fgY7+3CYf67aZTTl8aCCgUKhUCiPzj2eH7lYcC6VLbBXb0hBv7/7lq2RdxoBU/YY8q1qwXBfdMdg5DpZxAKDrBtOH7S0IjT21qGNrzMSC/UlKwUwkf817Q6h1BwqGCgUSq2Rf3k12vZ6DTfyBXh97iZ8N7EZ1Gc/hVX7/TApj7HbqM9+BKsOh2AqYsImfD62K35YdQjuTfrj6OE1cCwpGSQcHvLyzqBRnbZIEQTg5I0oZP09AT1mLENo/49xavXH7P4iDh95GUcQ2bAX4nWOWHTkIgbVkd2VMz2mD2yHPzafhkeDrth1eDsCpEy8GV9N6IbvVhyAyDUMf247jEEh8md3wZ5HHjB7JSMazOaKMUyA+wgHYVIy3httzoJvm6mIyc6CM5usBvZ23o+QPqUmUMFAoVBqBWPsL7Br+CNU+myISfilYBG66G5gzyuf48vQOei9KBFbx7vDpdk3uKazFBbtrflw+eMmipYEwJC8CUKBDQymArYI4nBMaD5LhavpSuSsHgpHOwmOZ6mhfGshBtrx8NGFD/BVQx4peswIfiMFScn5RD+kgsezQYbJWKlG2kQiQsjSeGjWe5Ma8X7wZQIUm/TYMtgW2+sfRqGmPtlKCQeeNVoZTHCh7d9VMmKwNwb/swE/tR/GhvO2LwbPbVC1+5nvjhC2As+UjQxm3inmWmuPw859MnJTf4KZ51UiFoDCQx/euy/liUEFA4VCqRW+G/4Oev6TzooFhvmn/gHXZSLwyn58eCoX1lwp/tjliBZ/3kIw86YyXsMhpQimUQHs9nyPfugtKcJfKSa86m4pMZZ805r9tO89AGbTTTQvqfwPbi/EovM5QEMnNrxy6VDLCq4bBlqZ8f0ZA94vzZgxBuc0QpwZbKmpch06oqfIiK/OG9BIyMetPatg/Kg+ERgyZBtNT/kqvdg0/eokZE6uaNJ/HyIVmVix5TSOp2agCklQRkN7Lsa36Ike037Bl2UqToBjc7uhroMrunZrg/279uLNLXFESFjD2RyPViNeR7DqFLI6zYOUswhj3/4Qy+ZUNTsF5XGggoFCodQKV28asG2cPTjjKkRynUq+CHH19/bweu0sTKv9LFGGm6Sc0YDLqVydd7piANyF7HdFaWs3j5Q0HEV5siTeZCovpPwrvPm8XLhISDKURxhukf1dKxnNBRFBcivOgFnLs2H8ehJkJEENSW7AFzuw/uP/Y4t8ZjKqzAdNRkV+x8ycslC
55QIHabnlE0uNXJOGUk8Lv1zLxi9la8qHVQaPW47ccfce4Vp25QmqMDX33o0oTwQqGCgUSq0QXpeP3Mkp2DHB8d6VpmT4vHYaZ76oA/ehq5C6ehh5W9Ul5YyYFPzqx7aAJ1oFriVzKN9MMaG+H3kVXi5ZyQ8GjGlsHbj0ODHJJjQLtrwuB8+czy6MPUSwgIdlr5kwyo72SVD++1DBQKFQaoW3l83BjDoNoJyQDMbkcMtr4RibNB15W8ahj7M3Pj2vQpMGItiJuFh0ZwjGewehvUyL/n9exqZXwgBdIqRiX8QajHB7SDu6YT3nIm3PdKA4Cts0HPzbkA9TqWDg1UELqQ7j1yVi8SAv6JM3Y7dOhC3hfHSScdFuXw4+irQF06xuYHwNSKhYoPx/QAUDhUKpFXgB05B30QmtvB1wNcuEodN/Qt7vYxA/rwd2u7yLLUQsMFzO2AWevSPGGHNwoNCAz8d1g+zNA7ALboOjWfqHFgsMF1e1RUMvBW5qnLDldjEYP0J5FdYfL9Zh+oDWEA47j8BWA5GuU4NpkNinLMarfTtCuvMMpO6hmHMiE10kT+JqUCjPP1QwUCiUWkMRNhRRd4ZWivOdvAPaCr6TOIouMBlL+8G5+HTxHrLcm5aqgo0CRINg1pZb4w9Zq4alp92yDce2ES4k5lfa33bsVmjGlob4mLvhJObecxQJ/th8An/U4NwolP8aVDBQKBQK5YlwesVsvD5rPuKyVfALa40fl/6LFm6iGu8/vK4j8NpOrJzauMr1+SuHo+66bkhbP7bK9ZSnCxUMFArl/wgONGY6Uv9psGZMAN5J7If4q7dZnxaapIPwDHPFsts56Kqo3s5j7uCO+DYqC1411xeUZwwVDBQKhUJ5bBbsycXn0d+XOcASe7bHrasxUJSKBVMupgzsh83Hb8CzQTds37GE9dLJtBrUWd0J6Zv2s5sp145CwPL2JDyRDe/78RVM+G4DAjq8hHW9Kh7RhG9fHYjfNhyHa4Oe2LFtERyoP+inChUMFAqFQnlsPp7gj74h/hCv347hbeuycQoXl7L1TV0CMGhvMtLqS6GJ+QXubq2Qk3HsgWkWH3gTQ39KRE4a4+xJjXou3kBLi9+LPl6OcPrhLJL+8IMhdTucnbyRlXPnkRxPU2oGFQwUCoVCeWxafX0W14dswsgJPfBaQgGY7p/Or/yC1bMsLpnOZOYCZj1Sk5NgkPUE9LOqmh2iEn98vhJdv7tZEpJg2Ss+6HmJfDXexLFiIXKGWJx68d16optYiUVpJkx0pZLhaUEFA4VCoVCeCI4R/bDnQj9LwFyMLn7eiDD6IGp2S3Txtkdq2Ah8NqkLJPyaudS+nWCAr7e0LOwd4AMwgsEQS9LXwr7SLJmAwzXGI5fwyZwM5R6oYKBQKBTK42EmtfuffsH4t2aWx3GssPnHXvD6cifwhR7nil2Qu63EObQxpmw2CQ6XQ8LlbQ25OeUeMbw8eLiSUAxEKtjwtUvXyP+eFm+cHBFyctIe2+snpeZQwUChUJ44AoGgQKvV1nY2nhtUKlWRjY2Nvrbz8dTgyLD0hzn45awQZ1e8wxYshrxraDh5K4b9G0tKmjxwzNnQkHgxTBjeZBK45JORCbLGkdC/uZZ8Y5xvmDHsq/NA4wFssq++3xf+rw2FedhucIzZmLAsh4gHsoLnj9ZWOoxeGINlE+oBumS4uUbgQlY2XGiPxFODCgYKhfLEqVOnzl9isfhPjUZT21l5HjC3adOmMxFQ12s7I0+Tgyk52PDTDETW8UFCtgb+DdrirzPJaOvDuMK0xbG/JiHY1QmO9Xvj2IXjWDymGfzqtEXy9cP4eexxuDo5IbjjJBze+io83lezaSp6z8eC6PHwdHZCQIdJuLxiOLy/t9xTmxOz8O1rg+Ax8yhsA1pgx61MKhaeMlQwUCiUp0JxcTGfw+EYLl++jNDQ0NrOTq1w6tSpbUQsdCXC6f/CgfSAN79ll6qoO/gr3CFLKVOWnsaUku+jZq8mS/m2GbvLv/f7aBFZysPpbUu/cTHj9w1keSJZp9QAKhgoFMpTgcfjGc1mM2fdunWDwsLC1i5evNg8duzY/4su5x9//HHH9OnTe+zates3nU7Xu7bzQ6E8CahgoFAoT5VBgwatY4RDXl6ebdeuXVcdPHiw47fffqt/6623xLWdtyeFgfDFF18c+fLLLzsMGzZs1cKFCyeQ81PXdr4olCcJFQwUCuWZYGtrm7d79+6uzHej0cj7+uuvZ8yePft9kUjEeeWVV4qmTJni6ubmVtvZrBHR0dHJ8+bNS/rnn38aubi4pM+cOfNrIhj+JkttZ41CeWpQwUChUJ45THcFU8gyS2nckSNH2nz66aejV69ePbSoqEjevn37hG7duvG6du3qWb9+/WeeR71ej2PHjiXs2rUra8+ePXZRUVH+Pj4+CYMHD147duzYJb///vtVsjzzfFEotQUVDBQK5bmgTZs2R5hl/vz5kyrG37x5M2jRokUtSeHd6tSpU5ExMTH1mHixWKzz9/fP8vPzU5KC3EBq+lwHBweRra2tFVkYz0ASgUDAIwU/YzdRVFxczFURMjMz1WQxZGRkmOLj48VkUZCF9WEskUjUDRo0uNi8efOTrVq1OsYsRLhkf/tt1YZ8FMr/E1QwUCiU55qgoKCbzDJ+/PhFNd3HZDJxCwoKbCrGMV0iTz53FMr/D1QwUCiU/xxcLtdEBQKF8mShgoFCoVAoFEq1UMFAoVAoFAqlWqhgoFAoFAqFUi1UMFAoFAqFQqkWKhgoFAqF8syZ28EZW3sewcG3gwHjTdg5tkZWbgZ4Fb/XdiYplaCCgUKhvLAYjcZGj5sGj8c7/yTyQnk4ph/IwPTSAC8IuUQg3POd8lxBBQOFQnlhiXxl1bk93/Z4rDS6zNiBs/NG/l9MikWhPA5UMFAoFAqFQqkWKhgoFAqFQqFUCxUMFAqF8hyRnZvfb9w3uzc6WQsgEXIQk6yBUmuqtM3jdsNQKI8CFQwUCoXynDDjl+26Oi58wds9nMojmwKfbcjEpi+61F7GKBRQwUChUCjPBR//sUPbsZ5EIODda3/5bk8HGIxm8KtYR6E8K6hgoFAolFpGq9UFKCQQViUWGGQSPlRaPaylwmecMwqlHCoYKBQKpZZ55fvtN0a3VNx3/Z8HCvBXOBULlNqFCgYKhUKpZW6mqbmA4p54rZGLv/bn4q+3Wj/7TFEod0EFA4VCodQyXE7VXREinglvdFHg2tXLZXEFOgHaNg15VlmjPAbZBRrY2tZ2Lp4cVDBQKBRKLeNkI9SSD1FNtj1+S0MEw1POEOWJsOtcSkqgj2ttZ+OJQQUDhUKh1DJLP+7j+O+G/YVBrtVrBqmkRrqC8hxw+npm5rTazsQThAoGCoVCqWWkYkHRnqva1AAXkRv3ASMnv9+Ri+UzOzy7jFEemb+239i97KMe3Wo7H08SKhgoFArlOWD9rL7und5aW/haJ1u5Qlp5YmcOh4Nvt+dgBRULLwRrjiQuHNM1dFZt5+NJQwUDhUKhPCfs+3Gw9brDt16Zuersr10iHPNcbC
XGwzF54hahrqnrPmtypbbzR7k/RpO56NS1rORZ/56ZfuDnIS/zuBxDbefpSUMFA4VCoTxHDGob+CezlIZfqc3MUB6KLk1lZPH9tLbz8bSggoFCoVAoFEq1UMFAoVAoFAqlWqhgoFAoFAqFUi1UMFAoFAqFQqkWKhgoFAqFQqFUCxUMFArluWXW0lPL1Rpd/7cHh0mqWr/n2x6PfQwmjby8PPPd8Uq13jj11xNZr/SPeLtbU58Vj30gCuUFhwoGCoXy3PLhmMiRzOfBC0kDvlt5dunvr7ewspU9XdfIp65n3/hlw2Wb5R/3iNj8Tb+Mp3owCuUFggoGCoXy3NO+oecGZilU6WxHfbnj2viugY7t6rtxn+Qxft187ZTJzLn52fjmY7s3D3ySSVMo/wmoYKBQKC8M1lJh3pZv+rkw379ZdmZ+UbG6/7tDwu0fNb0itT5n6q8n1FMHNHj3k3EtVj25nFIo/z2oYKBQKC8kH4xqOol8TDocldx39vLTS3+b1tLaTl6z7opT17MP/rrxStDyj7s3IAIk6+nmlEL5b0AFA4VCeaFpG+GxmSw2RSqdYvRXOy+N6Rxo2z7CVVbFpqZfN1/bDw43+ZOxkRNotwOF8nBQwUChUP4TyKXC/E1f9/Vivn+7/Mwf+UXqbu8NDfdVaoyXXvnpqN2bQxpN/2Rci7W1nU8K5UWFCgYKhfKfY8bIpq8yn8cup/QM9XU4ve3b/tm1nScK5UWHCgYKhfKfpVWY+/bazgOF8l+BCgYKhUKhUCjVQgUDhUKhUCiUaqGCgUKhUCgUSrVQwUChUCgUCqVaqGCgUCgUCoVSLVQwUCgUCoVCqRYqGCgUCoVCoVQLFQwUCoVCoVCqhQoGCoVCoVAo1UIFA4VCeeqYzeZL5COstvPxpDCbAZMxBylxm3E+ah8kUjF0Ji3qBAggEIoh43tCZOUPPk8BvtgWXJ4zbmhtmZ1QRwHwYAPm/4sGh1DbeaDUHlQwUCgUygMwGkk5zyUvS1Mh8rNv4vq1hci8cxIcgwZ8oRpSngOC/QbhZloz3Lg4F03D02HEERTlm9n9tTBBxBFhtm4pkJaMz1y+gL5YgcXb5BgxbDTCGvcFLYcpLwJUMFAoFMpdmEwGmIlIMOkLkXpjA65EL0WxMhVyvgkcYRFkPA4RBVpwjWaYzWpcPrsHdi4XYeeohEotBkegBs9shEUGCAGeErl71yErSwrJmAKI7PLg6dwZe/ZvR1ijHmQbEam+m2v3pCmUauDPmjXrw6KiInm3bt12tWvX7tCmTZv6nTp1KnLYsGGrIiIiohYvXjzu+vXrdaZMmfKXj49Pwk8//fRmenq6y/vvvz9boVDkf0bQaDTi2bNnv88kyMQzn89TWCwWa5h85ufnK5h4FxeX9DfffPMn5jyY86lTp871cePGLY6KiopYtWrVMOa8mfNnrgNzPZjrwlwfQrdDhw6169ev36bIyMhTzLbMPsy+TBp//fXXlISEBB8mbeYYd1+r5+Fa1PRalV6b0mvFnBdzfndfK+Y6MNeDuS7M9Sm9j0qv1d330f2uFZMH5vdhjs3k4Xm4FvcLM/c7k8/S37T0WjHPB/OcMOfJnG/pfVR6re6+j+73zIFS62g02Vi7uCvkvByIxAIiDHIhF5rBJ1UsLs8EkLKdw+WBY+KQRQmeMRX6fCNyCm8g6Y6BbKSHTMgHj8tFaqYJyiwzXvKdC4FfF1y6UACBlIe6Xrsg4Mmxd99sdOz4EXg8bm2fNoXyQPgfffTRV+QlCPKSnsFESKVS2NraQigUzsjIyILRaIJAIER+fgEbJo8JG87Ozp2h1erJTc5nw2Qduz/zneF5CjN5JOFPiTAqWc9hwj/m5uazYeYcSXhRUZGSDev1Bia8UqlUsWGNRsueO/lkwySeDTPbMeGCgiI2bDKZ2TBJdwaHw7vnWj0P16Km16r02pReK/L7V3mtmPNizl2l0rBh8lnpWpHtKt1H97tWzMKEs7JyZpB777m4FvcL63Q6NszkmYS/Kb1WzDmR8J/kHCvdR6XXquJ9xDxjVT1z7AWn1DpRJxbAhpcKAdcAnskEk9kiEmAWQKvmQqvRICtbALVKD78AgCfQw9HZHiryzhCazTAZ9VCT7x72UtwpNiIxRYM8JQf1ImXQqQvB03AthhDcfOQmbwW380ySOBUMlOcbfqNGjc7PmDGjUWlEly5d2KWUiRMnVNrhzTffqBT++OOPKoW//PKL5zbs7OxYbbh161YVwh3QqVOHsnC/fn3ZpZTRo0dVOtarr75SKXz3tXqerkV14aquTURE/UrhiteqR49u7FLK3dfq7vvo7ms1Y8a7T+1cnna4qmvVrFmTSuGK18pyXcqvTcVnjsPhmAlUNNQaRBwYeUi5uRZ8YxH4IiKSifDjCogo4IlgIL9Mvk6GnecHICctAZ269UD2jQWQBCRBYeuIO1wOuIy4IIU/V28EV2uGwX4Idh+8hcJTiVjQowGU+dvAEaqh1zFbEQFSmEjEJo+kX9vnTqE8GP65c+caw6KdKRQK5f8cDkxmIhQEuTCapThzSY6YFEd4yGPQvJkJIjEw83cf8JQX8PbX3+HKzSg0avU3vPmfQySVWowXS96mfAkfycZBKAh+CZJmO+AQokGC2hc3rniiVf1YMOKEsVvgchmhaAZtXKI87/D37dvXiVDb+aBQKGCHH9JSoxYxk0KbAxF4xgJsPtUTyXlAfn4SUoq7oVXmDaR7foHP3lZh0OBB2L3vAEIH9MPCH7+BrHkqQho0Ap/Ph1lnIGW/CTyzGL/vzQCnVTSMyfE4tGU7prRvi9NWI9HG/DURCnpyQCMEIh1onY3yIsDv3LnzXrOZ3qwUCoXCqjWOHudyhuLYif1o16knuk19BfMWrQdsHHDi6nnM++FL1tbH2d4Wc35dAH9XWzjYBwJcMUxGMcxcNfhELNhbOULo3wTcLCXiDXy4OHviVrESQ9qGI/qwDer45BKBSAQDRwijicO2NFAozzP8Tp067SOftImBQnkOoDYMtYsZJigLzbh4WYeo6Kvg8jgY/9IUNGzTEFvPHofQ2Y1tPWAMohs0bIBp/GJIko5CLHWG0aAnvx8XJpMlnXq+QgivGRBQLxDcwR1x/pWPseTvf2BTzxdNtXL4uaVBIBCwoy44XPqTU55/+Hv37u0M2h5GoVAoLFsOHUVkRD2IZQKkJyrRrXdbLN2wAx7SXFiV2BwYDSbIrSVoGdEYRcqtSDSboVIWgGNmLBcZ3wwm3M7KwxC7OOw1hCPr3CVkKFOgNRrQPawfBFE7weUI2OMJhIxgoDYMtQUzvHvgwIHrY2Ji6jFDpomI09va2ubNnz9/Up8+fbbUdv4eREF+nr1erxc6ODqlPYvj8deuXTuYUOXKhPgrcPENRcad27B280f8tWvw8vaBg42kbJtzl24h1I0PgZ07kpIy4GorRWpOATxdHZCWZ0D+7asIatoYael58PFyQ/LV8/AIaYDYtHwITQZY8fQQ2DhDmZ0Ce1cvZKYkwsXTGwLaPEf5P+S/2rqgJwUoY9dnhpH9M3E44OjM0OuNy
MrJwZ3UVJgMRug1GvDJs+/v5wcvT0/WJqAqSj0j6smf2cSDzmiE2KSBmSuAmsSlGETw5eohFEgeyosi0z1bnKvCiKGDwTfzkZoWi2GDBiHqbDr8PPVwk8dBWaSBSCzC1l1b4O7jBAeRDDLjcagL3aAn+WAOZzKZcCO1GI0Ee/DJwnTwXJ0gFdrC2mzA8Tk/ofOoQtaDJCMUOOTaGLUq8Eg6lGcL4wfm559/rjScjRTAgszMTKe+fftuZoRDUlKSp5WVVXFt5bEiRqORdys6qv35LSt/z89ICyqNJ/e4aeLcf6xEYrHmaR6fP2TIkDX3s2HwcXcEc3Rvbzfk6QGJjT3iE5LgUD8IV6/EICSUUeHWSExLgKtIAp0mByZrDxSl63D+wmVYyW1gI7eGsSALurwUmIlgAHlgLl2OQd1QD2g5Trh4KgpOrgIYlGpY2RtRpFLDjhmzTzv0KJT/DEqDHi9tughrsQAyCR/WdhJok3NIJcIZvg7WcPLzgowngJTxXVJcgOjLF3Hk8DGMHjcUXFIr13P4pABn3gkaaEktvsjIwfVcNY7nFOJUShpiU3TQaDUw6A0QC/kQk/fRL4E8tG7Z/KHq7UxhP3RQZxw7fh7unl5g5nvgkHfWxx9NhD7pGK6f/RtHj+1Hq7bdUJhvhCY6FWdu58PTNxWuDmJoDRyIuSYwuk8AFaKKBWjMjcXhY7EQy72gK8xAGx8NrAUaIiq4MBm5MHI50KkLIKSC4ZnCOFsrFQvMPVJVKZiXl2crk8mUjIgg4tXwbHNYzvzpk/J1apXN/daLxULuP29NVCucXW+M+PT7uhzO03Ebyp88efI88jm5yrU8acn0KHyIyLNqZy2BVYngZ8QCg7+tCbEaW8jFciTziqFKTwRPX4zwsGDEZ2jg7MCDnicCrBzZH0Tm4AYPF0/yTc0MKoKvkwRChTXyVdngkqoFx2RkvaNRKP+P/FdtGGx5QtiJDDBAgkJSCylKLAap/yMzsQCn4nOJCNBBSE5bZCVEeKAPGrdsi8S4XHx1Pg1Xs3JwIz0fOYVMbVwDHVcEK1Ioi4h44PKsYOZz2K4AKz7jgtnygsrV6WESW8HiEeEhIHlQyOXQ6Y+R/RrAw8MD585eg0DCQfSR43AWqHHz8Dj8+EMbyG3tkZIUhSYBBYgIKIRarYLeYIZYYGZ9MplNZvBlItStm0HSMkJF8i52zUOQt5ltYWEPR46nJSKnuCgPMoX7E7/ulKoxEbU2ffr0uaVhUt5iYmsXHLxegJjUYtxdhx4zZszSFStWjHjW+Syly9QPmm6f82mMmXHYUQGZTIJefZtDpdJiy8bjyM9IC5735gTl5J8Wyp6GaOD//fffL+O+gkGOkl42SEk2pbZy8uTLK20isXVFGFkY6gb4s5/2LpZ19Uo2JXIBQXI79ruCFQvsnmy8h18wG3KysQiQuiEhj39WFArl+YLUokOd7HEpR00KT/Ie4wtYt8osfD55G/BJLc+MI1HXsDM60WIMSCoOpQuDnMcYFApgzfRZsDJAxtYMGU/NYLs5wM7/wIRtBEYoubbIzs6Di6NdjbPJGC0yiTpLtHj/3U8Q2aINVq1eCaUyGaM6GSEQ6+DloEWXwANITDPD3Qto20QAo5ksRh07BwWHPRMejAYjyYwadQJIRYhnRHZ6LkQSMyRSM2sYaTSa2XPjE9GQn30Hzp6hT/iiU+5HYmKiV0Vhbivlg0/u0c71FOyiJcIvKVeDuCwN+dRi19YNTL99rQkGbz+/m6++MVCVmpQhz88vglwuhYenY1l3m6BQVbatQaeTJt662dA7KPj8k84H/9tvv51BeNLpUiiUR+C/2LpQSo+QQFw7dY11icz08cNgEQLMfyPr78gMuUSGAqUWPB6vTChUhLFpYN6RpauYq8Xsy4gE1m6wtE6l52DhnQIouA8nGEqRyBxgo/0OezelIys5Bt7ydLg5EpGj04Aj5sHHUw97GwP4QjOp7PAgMOuIqFAT0aKAkauGWa8ps7/g8jlwdyCVLo4eehPjZhyWrgjGbSQREhyuECkZlxCMno9wVSmPwokTJ1own3weB8EuUgQ7SyqtF5HfLMBJwi4l8Bn7AYLxGWeV5c7t6628TQa5m7s9mOVuZDJxpfChnVsnjn0agoGZPIcKBgqF8rTxk3FIzc0EEa9qTcTUloKDApFy8epjH0vAFyA5OxsaufUj7W8ldYSLE/kUbIWnTAN3JzFshFqoTUQE8AykwCeFitQMsUgIsZRHwmYYDAaIBDbQa3NIbbXc9sxgMIIvMEBOhIWOfNdpLfEWbcNhDNmQn5/72OdMqTlyubyI+ZzcxhVCfvn9mK8yQCGt2tD20rmTrRo0a3X42eSwMsaMS1/C7v6da4y47tmnOdvqsHDeDmg16qdiEMMvmY1v9tNInEKhPBz/VRsGBsaCwc2KjyKNGTrm3Vdh8gR2qjameZ5pZZDLodVq2VYIxiCb6axgm17LrgrzhW2XYEde8EpaFZhtOGW2AeQYHD4kfDEsk4UJHmq0hL1DIMRckiuZDvZWgJurjskZSVcDk47pT2ByzIdQbIaRHMvAEZH1EoiFjpC4vYm02GkQ85RsPgx65rjk+Hwje54CM9spA4OJC6NejKh4d0Tapjzm1aU8DMwsu8xnxVtib0werqao0DPcDoF3tTgw8AVC3TPL4F34udp4QFt0T/yyxXswdGQHcn/z4eNrsQUIDfdFgbX/paeRD/6MGTO+xX0EQ79GPuB1+QrrvxmFht4+OHwnAfKqNnwIis4sRK+lGhz+7dXHTKkcXeJBtJl5HKeWVZ4Iq3F4OE5duoSq9SLBUICw4PrwC4/Apdgc3Lp89L7b9m8Sir/PXoFTxUh1Aoa+swyrfx8Dr/DXkHhpKxude2kHRv59DTt/f/sxz6wEYwq8Gkwi6e+owcYmeHtHIOHOpRpZh696vzdUg5ZgQuOHb7Z9HI5uWov4Qg0CmnVBi2DnB27btkldLN93BR42dHaeFxkRKVR7+7pj9bUUCEmBrytp3WUme2JqSKwnAvIGN2anwygSAUIp2zXBDslkHTaX1rCYLe9tGWYKZ2ap2JWRnmvGjcR4hPoH3bP9g5AqPCDm6yHiMVNa8yAU8aHRaKAzcKAlasdo5oMvNMBM8qsnYZVZQMJSWNk4Q+pgj8QYF/C4SeDymFEQZrb7gctMjW1Ug+zKGtUROQRNsQnDJy/F4Y0fP+JVpTwK9vb2OYw4/+tQGmdaRzc2rmWADZQaI3wdy5v3/z2ZgRylAW92dkdYw6Ynayu/8am5eYc37cfosZ3B41veg+wQYBW5J9mZkC0lV3Z2AaIv3saQT19d+TTywXZJMNNb349L6z9D7PvDysIvD+mG/VFp+GfLHgSpzqDzsLfgHN4Je9f9xarnDV9NwJKLBQjqPQ3tjEfwxjfL0GfqbMyd2hotmraDm4cX4EUKqbRodOo+Ahz3Bji4fRneG9UTW04n4JuFmzC0dSB7rG+nDsffOy9i2vdLMK2jPTp06A+llS/27d0CTcxOdB3xOnyaDcTWL9qSC6eGNvs6OnUbgmKxD44c
2QJlQSERACZMGtAJB65n45cVO9EzotwSed/Cr9Fs8hz8M2Mgts+fhasZJlxdMh0z/9yOke/MxYftyU302h8wyoLhlKskr6tkeEWMg6spGcF934PLha9x8ooBv/bvQlRLNprWD0JIvw8xp68CxUol0mIOoOfgKZD7t8LhLX/Byy8S9RzJS6Zuf+z+52307dgJN/L5WLP3EMKdhEDhZQS2fxVcsx2OLH8LXQe9DNfwrti5rKTLyKjC8L5dEZVsxKZDx/B5az84NKyPfVeKMLiBHGsu5CP6/AGyoRZtGtSDfeQYrJ7ZEQ0Hvg2u1A2f9PHA+79vQJ9XZ+HHt4ezSerUShTrJagTGIwG/g64bfLFqd1L8d74/thw7Ba++XcX+Dtfxa2Oi9E1/XvMTeqDl12PYtxnSxFka8R3uw+if/OBuHb9OCKCwrB760J0H/URxAoXvNreHl/+sw2D3/wBs1/rijF9uuF0sh6b9u3HnRun8MXfm9F/qgQtAnuT+6oHDsbk488NO9GxjkWWbfxlBmb8uQ98s54Nr/zxfXz820YMf+9HfPlyDzZu9/wvMPWbJWg3fDrmffESJg/qiaNxSqzcuQfv9m+Gf/bEYNagZpj2z36Eez+u3H36/FdbF0pp6KHAiuvpTEM8uc9LWgMqGHMzMeH1gnHixm3wRZySOE6Z+GVekqxh4n18zZW2IpR+/l1UjNeTslDPLwj36QmpEj6Pz85UySP55JOKJZdpbWBKeqYVg2sCVyCAgG8DI0eCvGIbxCZLEJVAKgvaWBiK34LaZIKIawcBSYcZ2WHmGSDk8yEyyyDkMeKJpMPnwMpkxJGvxkCmeP7vzf8SRFSaGjSIiLJVxzcojZMKuejf0KHSdg08ZbiRoYbCwTX5mWeyAlL3sF/0+t3/Gk1mlFabmHv85Vf7VNpu17bTbMucjZ1dxtPIB2v0+CDBsGzDX+jdfxpYzWUqxsCXpqFn0llMmrkQEx0Po8fLn2BkuxBoDCgfctn3LXw9JhJe3pPxxhtT8M9P09Ewvx06vj0f032voN8KA6aOHIKmAyYj9+xmLNoTjQ2ns7F0yQLY2grLjj1vazR+W7oY7goFPnp5BN5YeBRhcX/jpS+XoGjtp9gTk4ATf89CmqU8gdogxnuff429S3/A0r1Jlkh9EvbGAssXLICNvPIbIysjA47uHuz3npM+BAyJ6L3kGhLjb6GJrw/eaP8rYFsPlzbMQSf/TZadjBKcvnyJnFsLJF7ZjFVdv8G0Ti74nm+HM9GHSXxzoO8n7KYciRO++O43LP5yMo7mMPtyses0s68vCm+3QSLPH4v+mQG5jmlqshiy6Dj2uHN+E1qT4w+a9jqiN83HpkSLYDiy+ENkSBtiVDcTpkz7CszYlKkLNwNtfNBxTgJkL9XDxgTmRcrHkQtX0SbQF/kzOgAyN1w+sApRZ47h16DWeG3qx/hh+vC7XqAGLNp2DJ3DgqAqVqPfqCmIbHsLX33yAT5uXnE7IyZ/thiXYq5hSsvwKu8Zoa03zuyZh6jTx/FzeHO8+fbXGO51AfpmM3FmgQeW7TuNVu374aeQZnjr7dkYW+8G0vwm4eq8ZgjtMhI3zu0lL2clPvxtF6KuR6Nb0zowFqbgo7924tXXXsL8H2bg/ck9YEXy/+P8NXj32z/Q1N8Vl9d9gisIxpgBErw04SvY3veuptQWQuiJUCAPLCmA+SWCwcRlfCnzWFHAaAcvZztcTM5iX3zsKImSeAaLYKh61HzFLgnmk5lIKo1sL1SZ2e6Oh3Foy+eRApxrTZ4RJURCMfQcKYxmK6gMHOTp5LiVxMX5W5m4kUwqKoWZcLLSItyBAweODDKuEdlKI+ILTLiWZoKBnK/llEmeeCbIyY3rKhOgZag7eg+aiBY9hoEn8Hi8C0t5aPbv29/x86FhDzQeCfWwYpeWA1795lnlqyo4Evsjk6b0qna7Tl0b4/jJW+lPy2dE6bDKv++3gcSzE7rbfIJ1TCD1MN746QBi5rfDm1suodPItyCxtsHU8cPx++GbqFdiX2RjzcgLZqwTH29Nn47hI8bg2PwZRMGJyQvA8kAzSqnX8JfRacY4Nhz079/Iu7Edr6wrxrWtX5MYFX5fuhDc3Mvo+8Y+DAvmgMvhWZodjYzKt7xEbBydy7oRNvz0DtIbzES4ry00JnasFQwqAf5dMBeJJ1bij2g7HF1YbuDZsFFDzF93CJjQDGNaBWL08j0kUYt+45ScgUh+V18W++LhlaxF+TuoLL78pTT3tUFoMfcE/Gz4sGSnvEnVJPfEXz9+hGMrv0OM9WAsmGm5Gbhiy/GYX3vklOl4Z+J4iGUqvM4c0aSDe0hXfPhpb0wl1+CViIWkBkN+I4HF8pov4rNW2Cg7SsmPLBaxKQ4Y9gqir10k1/Gjql63bO1JQD416dfxypcLcXrFh/hyxQm2WVWvZTzq6UuuDKfsw3IgM2vhXjp4WSiQkcwqMWzsmzgTdYD8Pj+QWqSAHVonEErhpBBg2OhpOBV9CALzt+yLvbKJe2l++MwuJfVLEzhCK7zx1rsYPnQsKxYY8fLurJ9hLVRjxPBRWPRZBzgGtcHMTwdjGlGwA9ptYwsd0wvk+fy/bMPAYCTCoL6rHa5kK8viLE9FeReDVk3uM40SPKm0ZI1FJDDPOSMgTIzFIflVeRVsIKoaUcG1DG4kt6UChXn5UNje1+/NPZi5BnAFchj5NrhTyMelRCEu3CxCcm4aiotSYFZp4SKXItDeC3KpHs5+jeAd1AIBweEkLAVHysf5s5dx8OBBHD15gogOCbp3b4XRIwagTbsOkFg5kjwLHvLqUZ4kClvbPHtXr7ictES/6rZt2rn/imeRp/vh5OyaiLzqt3Mhz1bjYZOf2nCbUsdN9xUMDHPXbMXGgEaARzu4ZM5Al7fTYMq2g4JfRAqh12Af0gFB9xgjC7H4q7EI8gtAi36vYOl336Nl45aICg8FrNvj79Wb0KlHc7wv9cfW3Rux/qcx2HY+HrPmrS/ZX4ozq37C4j0X8OrXCzC9mzs6dW6JIrEf9uzbBN1QF3QKDURA6yHY1MzSzdBj+AS0HjAEdYO80KR+tuUEbRyx8OuROHw1Fz8v34xvX22HgX8cQgBZF9zjdXTaNwZ+voHoPfkDdCY1/+/Hh8DP3x/D3vsV1dpXywPhqzmHn3YkVLl62CuvYFCPSIR4SdAkt7Lgs7F3wDfDuuNaAQfLd3S4Z98tu1egW2QwRB6NsH/r92xcu4k/YPGgHvCv9xn+WPUAewa+EO0ah8Cpy8tQlDUj8DGinQ+aRXYkr1qdpWy/T7EkdgmEtykW3Sd9Bk0xOe64jzGzUytEN6wHSRMuFn09EfXrhsKBx9gAKTC4uS3CG3WCwVQhQa4cfSOd0aJtfxiNWnKt34FofjfUX2PAtgP7MKCVG1q07ksKEB38Or0Nj4W9UK/1HPy1bqdlf44Vvn+9G0LrNoADn7z4rT3xw5SuqBsQgI4j3sKfX75Gss9D4a0jmPLDMrQZPA1Nh7wC7039EFD3G3yzaCM+mDwUnZqFw0v64jgCY1zR1nYeniq
kYO/iq8DVHGbcuEUkMEaNhgqiTijkw95GiiSlDlZCxtMi+f1JDaGijcLd9ovlLQ93HY4ku0ZvhiIpDk0VEUSA1kyLccn9F5XUFoqUTGjjE6FQFiFSq0WxTgZVgQoZfg44l82DvZsjXPzboG7zevDyDkaLiAZQqZXIysmCRMiFT4smuJIai0YhjWES82Fbry3EUheYSG3HRHQhU5HII8+jnlyLQhMXt3U65OmNyDBqYE/yPdrJmdzl5ioFEeXxmTBrZcfvJ7SMf9A2ApFYbW2jyH9GWXoQbGXi6OFLiLudBqlUhKaRdcuMHVl4wjxvv8ALTysDrGtoQpUrN51PKPlmj/g7lu8Hz12utM2l64MqhQd8tLDse4fRMxA7urxGfzzqWqVtj50vT+u3Vdvx213H/+TPVfikQvjQmQqGnxFdcOX6rbLg5Q3d2c9bt3uUxX083JLnRRsPlsWtmpaJ6RXSnDl3KVnKw0Pf/p5dSjm1pDf7ue+2Ja3E6M2WzzuW3+Twxevs55s92lSKP7KkL/sZ17vcTfngO2dKtrGktenI6conbB2G+GMWWxVHUls5f+lG2apSg8fFG8rPpetlSzrfHrJ8Nl9muT6jbl+slGz0jiXs51cLt+KrykfEmJ8t6U27Fct+Hrh2k/3cdrRyGtdu3S77/tWQ99G+YwvsP3wBAXIJvl68E19X2PbE9maWfC3egW8rxM9bsQrhROAFOEgwa8E2zKqw7o81e3E3PV+bxS6leLz2KfqRpSL9X/2cXUr5dfXO8pWRnyJuQuXtn3dyc3OfrfXpM8ZMSkgXmRVMvGJwDEJWJnDvccxrRoM6wci7cJMUqpaWK3aLEsFQseuhbI+S1q2745lWyIt5KmSlq2EKMYHLq5nhLDPb5Cez5sHM0ZPCnYgWUpgzaRmJtNGQsjszPR17z17ADz/+gqvNImEdSqog9YKhYrpRhHKIxAIIBnUD32QFz9H9kSaQsK2qr/CLob92DowLBmbhG80QGLQwGwzQcUzQG/jQkSwKdSZwtMVYGc3Bno7dH9JdJaWmuHl4J7Qf8c5HB1f8cPersYw3/tj3XHgTzOZ5fLPu599neno5oWXrUDg5KbBsyV6IRAKMn9QDPB4XefZNmz3Nrlh28qmnmP5zx/JTMbWdhReej9YcfaT9hNauuB79aPv+vxAXF+dHiKvtfDwt+CU1ZVeeGOklrV+MHwJ2lESJIGAWa1I6C/h8qI16sP2PJVi2M93TwvCgGrieJ4RGagsDOQ6/hoKhtLuDAyEYh40ckwZGPg8JKjW2J8dh5ZV4GMVcCN6bChcdyf+deDR2dkCSJh+JRhNMWgMp/MmCXBQTEaHXZ7OCg2swsf4azIxdAzMNF8kTM+kWSs67dChpgdEAZqDmTTOPbYWg44OeDGqNWvLl0vd+iAhueqpjWPed9nYO2T1HT5t9P8Ew9I0fRrt7+jywBeJZYR/Q7CMe7+/3DAYj/9jhS6w7aAatVo+F87Zj5IfvfWRr53KrmmQei9Lpre+t3j2ALyd3Q6Npy9Ej7F6PU3dzYvHHWG/qjDkT2jxqHnH97FG41m8NGyFQHHcQXd7fheNrSuuuBnh5N0XskR/Q8r1jOLv6kwemRaE8z/j7+9/+L9swlNKpjiuWRaez35nC2UwKTmaIoqnExklPauMmVSY4Ajk4/FKboZoVm3eLB8bOJ0rmiMikZHj637+7mlT2SSGvIUJFiDxS67+YnoFDKTnYnZKEfIMeeTo1KfCBIo4RAp0JRrkUEhHQuhA4blBBdewIRO7+yNEpoTZxWJsj1rpHpwePiBwDET8msw4mDRELjBBizsmoIwclgsFUYgcExgW2EGZSW1RqDNCSTRKKC+EvpCa8j8uUP4es23xn3UA2EPf7qyhpjHSTOGo8Pc3wThOR39cMN7960c5eAdeGTp8zQSSWqGsxy5Vg7JtCeg+dcHH9v0vvXufh5WyQOoXMqmq/Jwm/U6dO++638uevPoOXjyMK7ZphbI8G+PSLebAVFKFx6x7wtpPgm88+Q8MQD6RJ68NdFY04Xh28PKAVClKu4PdF2zF22ltwD2iIVjxn/PvTF7Dz9UWddqNxe9d8RGfw8fbr48ta2rT5SZjz+zL0nfAaQlytseSPOSiSBmDq4HC8/NI4BHd/G/O+Hot5Kw7izSmWIYHn963F6RzLcCSm9jqgS0P2+/I/f0K+lT9eG9P76V49CuUJ8zy2LuQrta4KmSjtSaYZrmCcIKWVj2q4az1fIEBYWAjOX7vDGjkyza2PCrPnxTvZuOFY9EDBMDvqHH6KvsIKBx2HKehJ7Z7PFPBk4XPBz1ay01e7avOQevsOzG07oF5SNjp4y1EkdsH14lR4uvvidtwN6DRKCPVatuXAoNKQWqCKCAMdOAYiIohg4Gi05LsBHGb+CY5FKjDdJTwBn6w3sesEjvYwGU1Iz8yGv+3zLxjMFj9Iz5WFMePOee2fX373TuKX043cqrOWqs4SpzoCpx0tYXdrs1dbZ5c0vytnm7Rq3ObIM8xutbTo1O3fxAunxubE3+rIhBn/Cx27NIJ/gPszqWTwGzdufI5Q5cr1Sxdj7v6z+HRgK0Q0uYRFC+ZiweKFOLboI5jDR2HZosXouHM33ureFb9u3ol/BnXHoAE30KjtJOze+TdaRTTDqvfa4whfhux1SxA8fi5kGz/Fz2cleCdSjZYTf8TJBW+xxwqP6IQNe7ZgaIsG+H3mQJzStkIn7iXsj2uBOo48DH11PNb8OgsRfV7FnLEtUP/AHvR//U/sWvGpxetUUQJWbD8Jx7iluGw3AvUSV+PN38T4aWrnZ3EdKZQnwu3bt/1rOw9303n6ulSRkG88+utQATta8QnADKW0EnBIAWyGvooGFROpgXtZyxBFClGm2Z7x1MjMGcFnRz0wBQEqjZKotG/JcMxSOAYTUnKykeVo8ZZblYHk9sRYzI26AK7WCJWNFSSJGTCnFaJFkBPORcVi4ZB+8Gwix5JbqXDJisfwGTMRvH8n/DJzkavUwN/bFjeTEiGwU4AbpYbQpITj7UT2ONl29lDpVeAyY8/JsesG+UGQV4Sc+HgU6cx4NSwJLYL1CLIHYs4WQyoyQU3OLzeVgw/NzREVl4CWwYFP4rI/VZpOXm5ytLXK2/Ftv1q3w2Gmo17926dzzu9cOo0Jt7K2w+GAnBrtm1KYZruicFm3FbeWdXPc6VBw6eNMxdPK56PMT9HjlbcHMK2QnLQzJ42agrrM85Cj4XBM2dmObDyHY3Z0cMh6Gvnlnz9/vtGDNvDzdERYgDNSUwtIyAmd27cmgqF8fdN6wWAc8EVG1IWdBFBrb8HEE+Pcmcv46qt3gMLy+S/6DeqL01+vQvden6F9Fy5SfiztPiiE2ixA1Nlz+OCrL9Bi+GBcnPUJ3v9tPZb1ehuMq29mfvuUWxdwOHcD9Bwuim5fh0NEO9SLaFkpvzHRV9H7r35omFKE/ouukBgqGCgvDszzSHjik8Y8Ls0aB/OGfbPP1CHMZc7LvUPfedz0BKTob+1ui30JuWzzAlP4M8NfjebyccFMwc4zGGEQ8a
Ana4VmIztbJQOXa65k98CIBCYNpoA2mcrTYLZh4nl8KaSensguKoCD/N7hlbtS78B30y54NGkJbz1gx+fC85VR4J05jy+/G4m9W7eiZcM+qHMxGueT0yEhBb0kNgPqiGBYxV5DHgmrOOQYJK8SvQZuKfFYNmAg0lLT8GpCKhIN+TAJZejdvSeurv0XRRkJ8FMXYsU7GeAKtTDp3HDnti3MxkwcOW1CZoYGHh58zPHdgU0ZXnjtcS/4M6JuXR/bbu9vMc9+qXn7iADHQ8/6+My01Yu+eHntlRM7B1SMdy0UgUeE6aBmExYMDOpxaciygT/XJL2s4mybdUeXjxrUeuSyJ5VHTfzB7WJtbjezU+gn3KLkfvDu2KQm++kNBoH1O3tVBpO5gkPismH/RCGfySwNMMLe0Yqf+0Zj+fdj2tRZ4uJgm/4k8s4/d+5cY/JZdRMDYUjvgYi7lIKPQh8sGsskkigU9WRpWLl+M+vcZOkEt0rbjZo+HcFthmODwohXPlxcEmuNLkFCLF61CUmxqYiwzsD6k4nwc7ZCfrEeTi5O+HD8FETK8nAn6xxu5Zhg9O0G1cnpmDgmqlL6096fjh5928Jal45P19SaJ08K5ZFgWvyeVxsGDzcH3MwxvN1y6uo3j/46RMgllZrHSa+xpzX2JhYSvWCqNMqh9DuDk70CCTk5kPIFJetxz3b3jIy4y9sjg8Zgws0sLVwLsuAQEoa7rSZtOELU79gYgXw5jor4GO7pCkNCAvZdv4m8HBWUXmJYyaywwzsUjfPyMPDDT6GzleNgGhe6pGwcPTgX4i5dQHZlvUR28wuAhGx/I/oiPu7ZC5OPHkLHjl1xceE8FB87iPZ2mfhtdiNodQXk/B2RHGsDvTEHd/KaoPugsbhx/TIOHzkCvToDPZ3+gdE0ETzufZ3cP1eE1PHCgoN3Dt6adyx/13f9n2lfynu9/NVGg15YMY4ZFjnotTljfuzWl3Un1Pf7yOsPk+a07aP/7dd86KrqnCExrQXk+Jr67Qct1OrybSd99M+Qu7fJyUwLtNfmsEP5OJmXWUPL0laB6vIR9OWR25XFwv1hGtEylQb7Dw/lzV4cdWJ2zChDYQHX7rBe7v6jXOF2SkSuSU3SuRt+dbWZNVvXo1QqJN45zn5+vtQyYVePkuGBJ+Msn7tLhvntOl/592AH2U0qGZZn0wQJN+4dJvrP3spxRyuYH3T99xiqGhx3PT62Urh0aOW18xPvez4UyvPMg2yKnhciGwfzRsw+YGxTz/HHV/uGT69+j6qx4zOzPBrA5fPKugkYPwlMF0QpDer5I+O0iq0dWnyLlWxX0rpQOsSyUhfEXWEGHp+LLVez4e+nRdMqfDFYCyUovJNPvnBwwssRzZOz8PeJA3BzckDajRgci0/Gr8s2QqPiYVsTf3D1GpicfOAukEIRGoKuwfVwylPB2h4wjuWWpKZibH0N6rZui6Vkf8eIRuBGRUNw+yqWjSXiw7stcvPPg6M1QscVol6gDRasjcdrM3/EH3/+hcPHTiAvtwADOvTBjbPfIjvpHJy9mTfpc6kl70Eo4COknq+ixwdbzV+Nb9axYZDTgWdx3IpigSmE+779x8C2XXpvrLjNdeU134dNd/KiIWsXTtrQ/0Hb/Dlj2F5yP/Mv7ls1mQmfP763W6OWnXdV3EYssdLcvZ8x8chuvnfbLg9K+/MN1z9LztN4Pmy+GUYGEZ1j0lvbmDJ6I49ZLrD+GhDY56G7jviMVTahypUzv/0BVo+SQwqF8kiUjFp67nF3tcftPNNbraaunnbklyFiLpfzUP2wDMwOPtZiJKosvt2ZOhbfXHlaKb5WRwSEnm0QuFsE1MSZUamdg4CICz3HAIGtPcwmxlV05YK3nsIZO91ccD05AwaVCp/ZyMA1iZAmdYBIpkLAoP7IvJMEY3oqDEVqmJnhEVZi2MrESP91IcxyK/A/fh35uTlQkzqgyc0VUw/uhR1HiF1CIxRh4YhZvwyOV87AY1JdWNmakK/SwWwQIafAhHBnFeSKuqyjqfCQetiw+F+oDFrsPBeD9zv3xp4lszH64/VlnmhfFOoGe2Lx4cT9H8w/nr/7+6ff2iBTOGQq87PZyWjm7k6q8gYp1BYJq4qvCJ/LQwOXZifOpp5o4ZcjhU3U2b6miSYuMwfF3dveuBrV8N/PxuwuLsitNBFFQdzNUNwlGKzk1kmZti19nfJPXic3oog9ljqrs+balpv8gO4hfIFAf3f6qVm5bl8dinsohzJBTlY3bmYWBzPfWcFwN0ad7aXzJ/qHN2qx8d6V94fPjPu+38pu/QfdbxWFQnkKMH5RCGtrOx81pVnjYP6o7w4amgfZ/TJtQMQb1e9RDofDR5+6TvjrbCZ00Fa5jdlogETAhUZnYt2fl9oq3LPdfTw9lh/L0upw2doeYdmp8HEqmYSOiBGzuRhOiW8gWdMYni7u8HN2xOWMeJhs5XCQSuHStQdMuRnwaBgGs6AJbkSfg1bIAU8qRrHBAIVYAL7BBCsDF4Up6TCRY2nMIhwm4oQxuOQU5kKhKoRYxMdH4x1Qt04E0nJTIFTzkF5YgMxsPgxaV3h6hoJxlupiZ4cGwT5IUuVhWs8eEMpTkZKwC0xJ8iI6k2Ys+UNDfBU9Z24zfz62SefGwc5PrRUtqE6LPRdObRnFfE+4fbOOj39QpebujKw096r3rMxLTSdsf6fL3KEfDqmXx9EZBcwUaX+8M/jg1Lnr25Zuc/vG1bAln4zaW5SXVWm6XZHESjli5l/9w5u1r/I8nZxdE0yO/aTFt/dvlxsLujFxYo4uEHFbVJk2kSFOLu43K27f+++zD+086PfenlPa1fc7pNXpRUIBXxd781pja0P6FCdhcR8YtaywCbUp/JJ8PJxgKPEs98AJOCgUyrOB8bxaUxuGJpOXs/2eZ+eN5Dwo/LjYKWRMFeW+faeuLnZIKMTrraaufvXwL0MkPC6nRhPfMB4evWRSmBgj8fu0T5iJNmjmG4DDNyy+cx4kGBjuJxpK15++lITWdgZ4O7qzrRYGFEF/yAuXYzrDqjAPKXvWQvTKNPLW54JvbwsnVydkLvoLDUlufRuEoP7gUVgkl+FSxh3omXlKSH3TViaDslgDhUCMouwcxpEzmBlMWN8KbB1Xy1isgcMXwcZWBL22EAfOqSEz5LPOnQqLitnJuJJSb8BIEpRYyWAns8b1jGTk5hfBxooPrYmZSrvmvigq0vRlyzQIZ/4e8cBwDbnvPeVgK2NUn+h+6+sEeeDfo8l7Z84/XrDnhwGKhzloTWk2+KVfSwXDjcPrxvr4z/yg4vqFh/6qkf1oa6f+C6ykVsUvz1rbZt67A1hjuNtXTrfJSk91VxYVKP54q9c5g04rrrgPaysxY16/pq067KkufaalQh7YuXtBfo6vTfpBy1Bqs5nvlH/yRk6R2/f2gS3eY6KSE+Mizg8qsllynY+XDgghFnI1RDyLH5g4WANJthWFqAVWiQcE1zsLMAvh+jr2N+QalCGMkWhVrSb3g/+f911PobxAlMztUiPatAh5qPBjUCNDK6a1YcwPh
/RN/RS/vjGowes12Ycp3iVELRTdFV86+oHBxsYKVlYiGFh/DEK2kL67N6Ji90TpyIiK4VJbh+w8HfJKBkkYGR8JVyeAJ5QjMeYSrOq1hUokgBUp3Hmk0HckYkF37QbyjhxBXHBdnFt6FRkGCewbBMPZ2h5JxXlQiqygkYrB0SrhwBfiUkEupDIXqLR6cIzMu5p174jc9Ey4C3mIic5B6ya5uKq0Q6jOBL1RD76BbCuyR3HSLtaGY/3WzTiRkA6lmofvV63CrHfawEjECLdmP8M9PMP75L5ioRQ+n4ewUD+bXh9tN308slH3ZnVddj+pgzN4+AVfLf1+eNuyKV0nVBYMu++sqpGBW4cW3diad936TU9Z2dhnFhfksN0c34xtlnS3oJfZ2GcNnvlP7/AGTU5XldaDsFHYxxtl/QRI2HeRZ1CGMnH2xtR3TbHbhus9OwV7qC+yhfzYOgb09udmHcvzmDRwXfqm6tLVGYz373aROe+GMqMr8zUrLbGRs7vP2Zrml18yO16VK/VF6Vi15SiGjBiMkqnpsXH1SgS16oEQ97uHJhUgtUgON3nlJ/lqcgFCPGyQe/MMbIOaPrTJTtyFI/BreH8vkYdPXUfbyDoPmeqzISc+ConwRwNfi3OpIwdPo037Zg+VRtK1c/Cs2xj64mys2nQIo0cOwrk9m6F2boDW9b0eKV+a3BQIFG7gVejHjbt4mNSg2j5Rk6qTe3fCyUEB/wbN71lnJrWsVIMI7lbVvmMeSPz5w/BpdP98Hzx6Ee1bN7jP2vthxr4jV9GpDfv8YvfxGIgMxWjXtkajnx6LktljX1hcnGyRqMS01tPWTDnw4yCZgM/VVbdPZx97bLidXT5EssLQShNTSJr0pFZezEzHirJZYquYXKGqOSbuRsATIM/JERqtBlySnCn7GPgCF5gNieBb20HIEUKfRepQ7mLI5Aqkzf8DAr4AWo2GnRb22oVoLB0+HK/FXoeKvAKLlHlQiyWQCYVwUKlJRDF49kLw1BaX16xhhpaDgtwM2OUWkBpuPiYPzoaDwA+Z2SJI+AayFCIuXYQuzRxw5cJW9OjXB3Hxd1CQX4iw4HpYvWwDvOq0B59VSi+WDcP9CA5w56w4nrLro3+OF+ydM1DxpNKVWlkVl874qlYW3pPujexbTtWlIeaLKrV3vT1vf8gXQyNYvwYVxYJIIi8c8dnSLo8iFCrCY0ZfBHQLS7t56j1XUzLrwphr0HiI4rcVV9jMbBXU1Ut09nLb+yRTCbXOcN9WiBxR4Kv2ygzWcNHZEP8b4FPjQumBknXxjlhMGjkYfy7bg1dGdcHJNfPRb+gkxBzeAbh1xorla9BryHCc3rcbLqH1YGcnxqFDJ2FS+CDt0Ap4dh6LzNjLuB3njhaOBjA+X9atWI7InoMRe+E0ktNyMGpEf6xfsQxNug+El60E5w7uQabYC97mJGRbBUGWdodcKgNWMccaNgInD+9HnloCkeoOIrsPQezli0hOuYORnYKw9dAFdG3bDGt2nsHI4QMQFZcH6/zL8CeCozj5Ki5GX4dL0+5wMadhy7F4jBjQAauXr0DbvsNQfOs4LufL0a91XSxbuRF9ho+ANbk6uXeu4FBMCiLbtUNeIflx404iqEEElq/dgaGjR2D71u2wkeQhJt8DI5s4YfvZJIwaxIg3LX5ftAljP/gMm1atQLNeQ8kLxIRj2zbC6NkM+dcPok77/gh2kpK3lgYrVm5Ab3J+edeP42wKHwO7kN/QoCW1FBX7W+zduJGIhTHYdj0fjVv1QdrBZUD90ey6LWtWIKBNHxRFHcKtbBVGjhqCs7s3QufWFM2DrLFs9VaMGD0cjNedlKvHcepGFvr07YWtq9cgossg+NgK2Ul+shOicPrCDTTqOgjmO6dw4o4JXRu5Qmzvj0OHo9A0UIgD0alo2bEZsotFEMSdgFzGx6k4HXkBkovj0hQt6tli9Zot6DtkCAykhpUCD/iRz1WrNqDn4CGwJrUsZcYt7Dwag9COnZB/8RCyJHXQrpE3TAYNVq5cj75DhyHx7H5kiILgy70DbyI4LiSo0cjXBru270F2kQZD+7bD/r374eksR1HCWRy6EIewiHCcuZTKrtuydi2a9hgAg5mLPdt2QRpICvuEI+D7tkBkkDO55gasW70WHfsNRnrUAcTr3aBNOgSdZ1fU4d6GnuPDTufN5DvN7IowdysYNTlYufEQhgwbSAqWmj5iD8e8efMmP0wrw/NK00ZBgvE/HtE29Lb+Y/qQhvdtBmZ8HIa5K1jBwIbvqrww004xAsLezg45hcWwtIg/+sVnnETF5iiRzNHBz9sZGrWSbQnI1glg5vJYp1DFRUXgqXjkedGBczsNQjG518miURUhY+IgfPf3T9C1aAt1QQ5EfBOMUhG4eVwYiCDQkXeVtbUcOflZpScAHoekpdbAo1UX3ExMw2cLjsJOno12oSrIxBoiJri4Gn0APQN1+HjpdHiFTGdm6cLt2zfBN+fD1SYBOTG2rH3Df2kOKqa1ITzM36bPJzuM7w9p0LtFqOsDpuCtOTKFQ0ZRXhY7hWNhYYGNtbUN40SIiLsieU32D1KEXK0YtrV1yPZv3GnL7XP7+jBhqVyR+/Kczc28fPxjq07h0XANivyusCBvkXX6wWTy+1dqIUjg1ensIxJpGHuEmqSl0eok91tn7+gSh1wOM6EJH+qcpg+TR36JYqqyicHa2TJXBF+Vz35mF1lmjavXpjv2r1iIEaMmYv6KPeAS5d/ZS4LruQYoAiMRt2sVOrcKgUOIO46YBRBe2wYVV87WFBi/8Gt3nYaNUY0JXb2RZShCkcABUp4lC2l3UtF7XCv88OsJ8IrPoE2YF05tWgytUYR1u89DInVGU8VN+PYajuU7LoCv8MCIbu64nZmPyD4DsWLZJowb3R9bTiexxjam9CQ2XX1RNpr2HIiVy/YhM/48nNzkSMgLg17uCpFZj8NnY9CwS18YSSHN45qx9ch1jOxQB5uOJGHCoDDcNBiRRcpufkoi/EPqghmCvudiNvSKEHRoJoKj1hXqtAvQ59xAzHU5YhK5CPX3hvDWPuRrjVi2fAMcSY1ewiOFXQ8nbL7Fx+b1W/DeK8NwZttajBg5Govmr4PCnI6QziWjd8h15Ze8IXR6xvEqH5pCPSTKOziXygNbbyYvKKbWsWX1egRI9Bg1shtu5ybCsX0/4MoO5KU4w80vmN1fQEq57bFyDG9qx7jSA/P6Wb9qN95+pTfy0u+Q2pcGXfsNxaItUejgZUB23Hkog1pDQDaPv5OBWKLdXibX4g65RtkqPsREzHGJyOvVawiO7NyKi3vWQBhng2EjJyAzX80W1hf2rYVVkh2GjhyHxQu2YsLE3li+7SomDWuB64W52H0lGUL1FbRq9Da0eYmQu/jCqMrFzktJEGkuw7qOE7zRDOkFluckJV2N8SPa4GJqNoJbdkXW0RXQCZ3RZcBgrNx+Bf3qkocrejv5rfRYsmwvbGVCCFRKDG9TjGyXXtBErYchaBgSDi5HkY6DBRvPwl6ZjDY9wuAS3Bwa20DEnriNW2c2
IMTsh0EjhmHeqiPs+S+7eRNjJw3BmiM3MaRt0MM8ZzXm5Zdf/vu/IBgYnB0VSFHh1TbT1kzaO3fgDSG/ilYB8mfNDqUkhTM7ydS9YoDobDT08cD+K5Z5dRhRUTphVXWtCsw27HawTHXNGEucSixAS50RAb7uKCpkJEsSlBpb6Mh9LTBrYEhJBdfHGnI3FzRoWA/TP/ocxQWF+HrW1xhAxMBlIjj6uTpjWXEhEmQkxw4KmFOTkZ94GyJXB9jYkUpsYhx4RqY1gKQvE0OdlIyLfCG5Ji7487AtpjTNQZYoggiLIyhWmyERxeGng8Hw8zAiO/YvFGdrUD9ADEdFOvKSVbCyOUKEcQysvUPvGeHxohPo58Zdczpt+ycLTxTumzvwXq9aD0nTHmN+2798DuvjIO5aVJOIZm1Z48OY5CuhNdl/dOOX598dN/XrxX2jDuzszXdwSQ0Nb/DUHKtZ29hmGWV9pUUxO08oBBpLYW7tudfHLXQ/87XGguEBLQwMKUrRDncrDSuArl+52LlOaIMazSf1wBYGfuwJJPkJ4N7Iku+27esiOi4Vp3ceQK9mjkjKTIfCzQOFWRll+zA3M9OSwCeFdZ5Gx0ri0udZk3wWwc2a487JBHAY3/CM43F1LsIbRWD9ztN4eWj7ko3FsHH1Q317X0CVDG8/L9g7NICSVDBupWVUeEGYkZVwC1fOZcGvji/UJNqZn4fMxCvw8grEhVPnUZiahlJXluxe5GXjRWo0bTo0h5UhDw0b1ceGPechtvZD9uX94KZZo0WLRjh229IqZa1LRlKsFPBxQeqtq1An5EB6Yj9aNa2Hy0VGS35JDSI7swAX9p5HWIA9XOu0QL06wIYl12Dr4oagFvXgm5WL+KvX2e31ubFwaxiJG7ssfi3cvN3ZaynwqANllhBqUnOHb+VJRH09rZERH4P6dfyxmwgVd4nFqlyfnwjnBpG4lrLfcl2Yk7RywKljVyFOSkPDMCHCgoJxOF6FHsEycDOvISNZBnuTGYFNmuP27srTWFuuEbD9XDZC7cUQimS4k5KMYiKYfE3JpNYjBi+wAZJvxUN7pwBt7Mi10RciXxYAETceLuQlnJKahNgCI3Kt/CHmxRHB4o2k1FQIvS0Dcjx5ucjJTAXH2gs2Dv5o6Gpmb8QCIiDCQ7yx+1ohifdFYzdSmBizkJiQAD0s4lVdmIr4W5fg5BBUbidXcjtwuexAfUhdAhHa3ArFhUbE3iCVABIvFNsh6lwsVLeKEdAWcAwIRhNyXxWQy5gca0LyyW1wbtcUufmFRMgoYCsRwM7XD6nZ+SjWmKAgSQfZa5GWmgBvcv+cu52Fxv6OD3p8HokZM2Z8W/1WLxZDuoRt+nLJyc5fTmxZ9QakQA8hwvMCKYh5pFBleh3MJsbPAofUqk2soaNWq4WZbGckBTozz4SZy2fj+Xx+SRLlYoSxXygNs/YM5pLuipKCVqkXQGBlBb2JD62Kx9baHWzFuEVuJAFJ05iZAmQ5wbaOFIdPnsboIUPw6dezsWLRQsz48gsUSmwguXkD9ra2SGS8NCrsYOCbUZwUB7WvC6xtLRVZY1mWSF7t7JF7KxYhjVpCZVTir5PH8eeq42jVNgQf9kshFaJiyPjnSIERAQ15r4R6SqFV63Erjrxj+fkQkPvxjzntoXEdDV1aPFpGtkOnkS+DZyB5FlRrA/fcw8wV0q1N3WvjZ+8+uej9rvf2YT4EDTr0X1EqGA6un/9mqWD498Afb9dk/5a+HXZVFR/RofvWmuxvMBgkeXl5gTY2NnFCoVBZ03yXwriKVoT1ahYbfXSam6u9tdSpXtmkUgI+/55hl1Wh1Rse2Ndrcgx9F6pzrGBwx51fyVWrUb/+A20YBr5ksQ/xLOkqt/Zrjfrks/5ro8q28XRiWn7qsd/rlLiBqD9xfNn6dvWI2q43oSzMSLxQb5+yMNOh5ESer4Z+bAsSeo8dy35OGtSuyjw1CHIsSQUY2YNIgR7lnq0ZnxHdh48vSzeiP/OCsrykFHUtXT9jR1ce5s7kvN7AVvccx7vEtcegiZNIKZWMmwY+RnQhZ88sJfhU2L49MwPAa5MqpTFgrCUvLZjMBLqhbYtygdvEgSyverPfPep3YD9HdXcpO7dSXOpZ7DfqdxrGfjLjdwLbkmo06rJhga0fmAnQm71Rfo2ZrPh3CK2UVo+SnruXJg0ti2MGRYe8YslDkx5jy+In948g/yPKwkxRHTzJE5ePHGQ8o8BBIMOIjiFAx3JjqQGtScWgteU+OHdgCxFkfcBlstimfBtvN4vXzx7jLHllfsm6FX5ne/9I9lg+bqVnUY53yafE2g3+4eXdeD79Xir7PrZ7WNl3J8vthA5Nyn209GpBMtTCct1svCNRVpWpwxzQcp8wHvsD+0SS/5FsmJl94L1x5XlkcDHqkanCU2H27NnvP52Ua4fE+OTrHw7sNNVWVod5aSvut12XUE9EHWeMxc3MCLYqt+EVq5GfnQalRg2hkysc7exYI0FuBQPHe7w+VpEOnyOEVmZPavek9s7vBgH3LLz4BTidmQiJlRPMSjVMag3UWg2bHT5PgG3HokkFYBHad+6KUwP64s9xk+A0Yzq4xVoIrK0gNJGKgNYAkUAEoVDAtmZwmPcqETxiso4vFMOcn4Pzp05CYesFXoQOAvJiPUyOcXKJN+yM2dDrNSg2ZWBGuyL426uZmxSp6SrYKQRsd6bCRgQz/yi4njqcvxqN1O9WYOCEtbB39LrPmb5AqAp3dGtb74t3+tV7LHsAhorTUSdGHeq5benPM3uNeePrM4VHWtRkf1/vh+9qYFrqs7OzNwsEgl4KhYLj6GipTOTk5DDCdreDg0P3mnhzrEhA/da/3h0nENRMMDzQ6BFMme5zE9ctDp7lfEONm0tfDF+jtY3EA0+nAfrFIqxN+xpt17hDn6eWh4kT+z61tGsKhxQgzjXqDX143n///dn/BdHgbiPYm5ySpVn7eS/2ZrhfpaQUD8bro95Eri0zL0S5AKg4L0REPX84+objzOY94NjbIEWnh1goKhMJpV0UlUZMVHVYrhIH04Tg5+SjTfMIXDidhSTVBQhjr0Mvk4FTnAuB1gR1MWNoyYcOXOjd7PDl639gycHD4H7yMfrO+QFH0uNZm0amBYsZcWHMzYeeJ4KVwAQhM2M1hw97tRIhGfkYXCcY//BjEFWUDeXNZAhIjVpo5QKeVT5UqjTkm6XgkHe8LRHkjXusQ8GlobgUfxNtmjbEtdgYIlo4aNauG7buOIVeQ+bi3NVsxF6LxuJv+uON7w+R9daP8WvVHnYywfVzF2MFG2f17fkk05XKbPJUygLWUZSAb5ncaWr7D1ycZC5mG7HCrJDame2sHHg2UluI+fft7q8ROp1OLBQKc4hIkF69ehUzZ84su99ffvllREREMEZteqVSaSeTyQof51g8LrdGDtKqEwwsUofjUGUzNSVOduzprx0Cms2sbpcH2jBQKJRny7fffjvjRRcMmclpUe/1aTvR2S4kqab7mLmkBi0xIFsnJS8lprtNCK1ZV6nFYJdWjwb1vMAJnow
zI6ah4+jXEJeRChl5hQmJiLt7KKUl4Xs9QvI5YlzI0qCBrRH5d5Yj9qYaBhjhqEpEqrU3zKlG2FtZoTA/F9L6TbHup+9g5Ivg5OMOh1u2yO7dBeLjZ6D394LRpIRGz4ETV4CctDS2q4SrLobIpCOFhg4L/P3h0aw5xD7OsJHwoXRyxqu//AiDSg0jefdzyDnxbBTg5BaAw+Whrocvzp45io1rPdEoRAWV5ipa+HnBykoIVVEC+vT6Ht71W2Le7K6QkDxKWkzCtj8/Qu/XfmSNK18kBLrirV1C/X/6cFDIE3cb7Rra5NDtU/tYY7A2/cb/xnyOaTaFKayZhsUn2hxDxAIz9buU+X706FH8888/ZevkcjkjGJivPCIWmO0ey3kyn8+rkY8TbTU2DCw6ZZnTRgdD0gdADQRDTQ5uIQ8b9lwHNzMG3UdOLBtmWZF952+jU6PKTclHtyyHwMETRoe6aBn04D7fgyuXov3wMffEFydH41i8FqkZxRg/6MG13N37TqFrp8jKkaZinE3QoomfHU5uWoHm/UY8MI3HR4P4AhF8bSpfpAVbL2Ji7+qG+Cmx/7YADgI96nvJYCzOhM7KCQ/SwNl3ouDgHVGjnKlykiG280BN7KYWz9uAcZMHVL9hBf7ZeQ0vda9b5bqrB1egTvsRZa5n5i/ahknjez1U+vdixIHoDAQLkuFez2Jr89eGM5gy4F7j38s7lyCs+9h74u9mx45j6NGjcjeVvuAODHJvSJ6ymfrDDKus76NY+zDhRyU5R90wp0hb7bTb9la883fupCv//ah7u4c9Bo/Uxr9qHU5+TRNrM3I2qwBX0jIRU2iAUmsATyhBcVEuxCod4sSkFp+ciounjsKjQWMYiyz9QxXFBad0YX1Kl8cbiTDhGTmsl8kQczJ2XAnB9Zzz4PBM4KhJzd+3GRBzHVITY2isR/joIZj+3nRcu5SA+Yt/x/yPZqPT568hxpaHjFu3SInAQZ5ZCAem4DcZYGdjg8LMbEgEYrjdugmtvRyHtkdj5cFdaNGwMTo1b4aQ8Pq4cuE82knt8cUb43A+NRnv/jwXepIHtVqF7n3644/1S7Ez1R82TRogLTEZ3epLoI9NwLD3OuPnL1/Cq8NFOBmXA77xBo7s3oeeL/PAu48bSHu58KHC1RB3vxVKjcFJqzfJqkvASshLv3ApVr9tdv+n1hQ55ZN/hlw+c6izT50GZyRSq9LhiYzPoWqNKlNSUuq6u7tfq8lxkpOTe3t4eChKw0TwV1r/+++/47vvvisNSjMzM6c5OTnd09VQU4QCQY2MHtW6+4+SKMOgca0YTE9L9XJxdUt80C4PtGFYvXobeMUZGDTBYssQVL8h6oqtyQskA1djkpBdaIYk+Si8QoPhGNAUN6/dRAgnHklE3Nj4NkGwowi3uXUwrkUjFBWrsWDud+gwcjiuX89EVpEBhtijqFvPG4HtB2L7xj0ojM9A06QLuJSuh1kRgBaBFkO33bsvYsDEcVAXFSL98kEkaq2gk/vi1MZ1qOstQId2TXE+RQuOxBnXb95CY1c1zhY4ICO1AP3DzBC5OsPZPQALV2yHLjkF4SmXyPY65Bvk6NMiGAdW/Arn0KbQ2Ych+sBO8g4oRqemATiRJUP3luG4cXIXion6dwhvTx7OYzDq9PATZcAkkiCyY0dsOXAdxXmFaOVrQj4p2mWBYchWi3Fqzw6AvOQ6tvTHzQLG+5sQu1YugsRWgSadu2DL3svQ6wwY3cdSOF3atwrFYnLOrq1xM60YG5f9i0mj2kEmFGH9rkvIuHEZw3vWR0qRCVLfujh2OAqkWoMIPxE0kkDs238IYn0RmtWxQpZWDK9mneAi5GDFv+thxVWhz8jR0BSmQyBXYNWeaOjzstC5kS0SigSIKXCEVW4MJPp8dO3ZHoeupJLanhHHNiwFR+GIupGRWLH+BAa3dEFclhZ8j3BcOXIUAkMxRo0ehKzYk7iWZYbOoMD2lf9CbmeFhu37kbwDS5ZthA1PC38XkJfsESh822DZ5tNQZsXh0I41aNC6Kw6euUIulR6jB7djr8X1wxtRwJPDL6ItHGVc/Dh/E8IUajTu0AJXYnORz3dB/MF16DZsAK7fugMZ/wasFWKciVOhSMvB3vVrIJEDDToOgRVHjwUrd8OUnYWAyNs4cz0HemsvdApxwc61ayEWGdCwbQtcuJJO4l1x48YNRHoX4kKhG4wpt8h6I0LDPcAROeDImRik5Jkwoe/D+dOoKQ8zQmJMO58hDxN+VBiPkdU598nPyDz95vjmoz2dwm49yjE44BLRwPgx5EEo4aGLly06uzvCwNWBbzaQwlSKHV/sQO5OJQoG9IRzl3aIO38Fzg2aQms0QFzSulDWPVHySmMNHSu0MDCt02KjEQZ1EQ5wyDthZRSc5Tbgcck7y5gDg40cWrOevJh5SMzNhoOnB66cYbwLK7Hg978gNJlw81AUOJENkX89CqYgV6ikCmj0Bpg0Goj0JmQosyAg5zG2cTP06tEV+7Tr8b2PL05fUCOgTiA+rB+O6Rcv4IsJo7B16V+YOPkN1PnlT3R99zVEpdxGWkYKWoW0hNhRhT1dhsNh74/wd7RHNHlemRaTuq4iFBOBolRroMo/iqxiKQRsS3XVLQwfDKj7UOEH/k4czn2FY03uE6lJs6V9HfclX4zov6HGB30EmJklG7TotLNinMlkyuByuT7V7UvEwlWNRiMXi8XF1W3bokWLtUyLQsuWLXGLCMj09MqzSBsMBkRHR8Pb2xvbtm3Dm2+++V12dvYjCwaRgK9zJ+85Cbk/xQIuRDwupEKuQSzkmWRivtFayDeQeJm3IOuBVRtmZk0dT7FPYipsUzqE02jQMFaIDxYMD1qpI4VkgHW5jcXtq9EQ+rhBkX0CXdv0x9Z/FpMH2R7NWnXGhUQDbJzdcOTccVK4tIZcYOlq0auKkZV0C8sPxcHGzg++tmLEya1QFHcdVgpvNO/SknWB2rJbNyRtz8SFUxdhE9wUMnn5zS/gW57+rOREREUlo8/o0Vi8ej+cAsLRNsIEk70/ODFHIXYIhMLGjrV67tgiDCvWHUNZb4shHZE9eqLgUAFioq6iTc/hWLZ4FfnFg8kLxZqo/mZYtvcU/Fv2QCPuZeSRgrwTEQsMquJiWPsHkYICiPCUIDqFh/BmbXDo0FEUJtxAv66RuLBlOS5fN6D/2LHQG/OJYCDnzpfAUaIl1y0WrQaMxo2tF5EkCsGkroE4nipAXTsdbuaXC/LriVoMmdAc+0vmAnP1D4ODmGiC9EQM6d0ae/NiEH3qDNwbdYLcrEVE207QHF4AD3cPuNtpYOXujmCFAC4+zrhx4AAs7U1aKGWuCPUv73RXZaagX7fmSD6xHjevxKLdkIm4sfcWNBwRPG25SE++gU7kN824norLxR54pX8z7Lqej7DWHXDr9ALI67YlBbQZrcKccfJyCptm3KUbaD1gHG7uvIZUq/qY1M4D5/PUaORggl+LTmjlbYWYI2sq3V9yRz+0bReJU+QaOlnL4O8mLVunVGphS16y2RotEQwSOAVFoFldLX
ILVBBbK5B6KwtSWx8EerjhnKMenLwbSL58Ge27jsStDWfIy9QM1wBHqHREMJpVaNupPdQXcnDn7FHYuDSCSM5Ux0yIU4rQOrgOBCJniA3XIRT5QS63g5A8iC2ah2PhwgS0aVwHHFMBea50EImtoExjKh9PRzAwrqEJT6SgfxbIRbzYuLiU/KUfdousfuuHgcfaMwiY2RM4AghJpeadyMaYffIUInv1xLmZ0xDy659IXLUKLi3bweBKKhE2QoTfjoEDl48uET5IS05FUXYuwot+xKEEBUKdM5HM8cXxi1rymzogId0IH48g1pA3ntyD2XwpJDJrSAVW0FhzoTFxkZ+Xh+DZH+HmOzOQlZEGA6m0DOnTB2sSEyCUi6Hmi0BuMWj4zBTaBiKgjcgnhYaepGUi301ESPDJG0nH8YSfZzERRjyEKpwwpm0XFBYo4aqIw+5Du9C+9xDIyXtIUzcUI7/6Ep7KHHzctAPSbEhF0bM1ESQX4WonIGkaUFhsTdJRw5CrgVpIvnPUrCtq7iO4jX5WCPmcogsXY9W75wysNSOk/Pz8WXZ2dltqsCmHiIVCrVZrJRKJ7pldsiJEIIjIMwsiAtiuB6VSiUhSuWJEgq+vL65cuQKBQMAOA/7www+Z9Y88pIVx4+xkZzM74YsOd6+6pxy/ffu2/YPSYkZiSAI7dc7OTAl3MKV+ksPzmu7u6PxAscAe6IF+GLRpyCQ3vkW7clEnLAJBjkSM+Drh38WLEdyuG7LOMoUy04dI6giJp9F3YHcs3XgIAwIsFvODmtth3d5jaNm2J+IvnCNHlOPW2c2QErFgxTqB45GXtRsubFqMIpUJY8f2xaIV29CpbyC2H4tBz1b10GNoPyz8ZyECIzuja+8OWLBwMXoMGoETx6LBIQWXWafEtfh0tAsTwk2YAZ2gLtsc6StMxI5zWgzo6wGewA3XN5BjaMwYPYakt2gRWnYvGbpoUmPxggXoM2o8Lu1di81mJ/QKKTcicnR2xP4ztxHS0hdRV2Jg7dsY2aQmkKU0wzagEVYvWQy3+m3QuZkJCxYsxRCSvog8u4LsWygihVunTq2waMlKyN3D0dAlD4uW7cfYkX2xdPdtuIa0xrKF6zFqwkB07FAfS1fthH/zPkQ18qCTilgDO7lHIJbOWwA1qb28NGIglm/cj37D+kJERIlZYgV7Zx/svKCCTeZlXC/0hrvCgKQcFVqyTbEihEmScemWHOHhllEEVq7+rLMs68DGaN9JjvmLV4Lr3gQKVQrSBAo0C26O5UuXkl3t0aehBAv/3YpRI3riQjoHDXsMwr9rd6Lr4CCcvnwVImvLKISIjl2xeMkKSDyaopljGhatvYExIwaxzcGmy2uw5oozGjhIYedeD0sXLwHfOZwUwoX4Z8VOjBs/HOtIfuwDWyJq4SaMntAPzqR2eehyLBq7BLDp59w8hTVJEozq2xK71+yGzL0+eFYS9v7Rx50Az02GwNZdsGTxMggcwuAu5uBKLHkhBxjBt7LGJSLotKRWNmTYYCxdvh7NezATq3HRJVSMY5evIzjQEzGxKWgaEIlgOyUKjbawJeu715fhyKUY+Lb3g4kIwFtRp2AjrzTXzBOFmXzqqSX+hCnOzTk+ZUjD1wKHh0U/i+N90KkN5kedw+Wf/kDd6a8i5o2XYXpbgDiOGXydHjaFxeD98gVE7q64fOcsdkTFwlaQiywfFVr5maAxuhPxLULfniHYtPkEKcQMUBapEWRvDWufQGRmZiLPbISTnBT2SvJukomZFy8Cgvzhu+AvmH7+GyO+/hwrcxIQv2ErzNYSCH2doTbrYOXhBVVONgyaPBQUFMLG3RaLz5/Fq+3a43ZqFvZuP47W7RrDs0iJU4VEnBuZuSE4cPHuiejENKiL82GSmGGylSGf5ENFBKuACIpPb3yJH45rkNdMi0aNfRF36iR6TP4EP7x9DHJ7J7Ro+x7WxLzF+mh5XiFSbkcTX/t134wauKg288Hlck/eHUd+8yAnJyemVYzxDVDRCySXiIV0UtDbM4Xrg9IdM2ZM2fBexo7m7Nmz2LBhAwYMKO/OZeI///xz1gjyYSkuLpaTvC+VSCT9SJ5qtI+/v/84kvdJ1eXdwcn9EuA+6IHqogIPvMv6jppQIWSD4FITBI4Qo8eNs3wPsMxoGeouQOhEdhpwTH6pfCibrWcoJk2wDO9r4t+D/ZzycvlQOAZ/BVnGjCsLvzTR8t2nlUV08MS2mPBSeV4mTrCsH9ij3FXvSy9Z4gIGl2/Xpk+5rQIzRdmA0eXHmDC+fOin2NoR40hBzNCuV/mww1I8wtpgbMmIvcZjy/fzb2j5HDm2PN2JEy0FXBMXskyaXBY/fuzwkm8hCCjJ9rgJlrTUDSxtAfY+ERhDFhZvcmM0sMyWKjTrIREJENiiKyS2HnipZD8HpkLe0zLcsjvzizcdXXa8SRPLh3827zYEpQObdx+4gqETGmPoaIutSHHOHfCMOvRr7Qs7cUDZPuPGlduSTAix7B3pwfx3IdffcnyP4eXHE9m4knMsvd4BCGhcfv3a9q1cYZ78UqkdQbk9x/CS30ZTYgPjGdERoyNK1xrhXb8V+ja1DK6cPLE0b5ZzHDO+PB/jx40qW1fRimHAsPKhwOMnlN8jgU26ILDk95g4sSR+QHl6/o06kaU8nUmTKz4TT54XoXVBzOdk3LqVlLPm8173jkd+ijDyN+aNaXiZCNZVfy5EiEIBj1atkKcvRrOkZHzRshl465aDQ54VHnmBv6Iphj4vEbv3HMKGrV+gSUggVMpiKLPyMaBHYzQM1mHlvmhcSyjG0NahmL+tEAKjGYy/JV16Lsyh9mxV6s7Bk3Dr0hzySSNxQFeM1C0HYOCaMMgzEIf2nERe64bIciHCoTgUiQuWQ9qC3JfyYKS6euNKYgxatGkJiZUQQltnKGxt8cn8heCQfL3epzc+3LAZ7RsFQa1Ws9NqG+OKYDZqwdWb8dbBLHzU2ggP7lm4CHpCkJuLv5e/jZc+WYSBLy9CVFQ0dm/YhkYdO8PAYTo8n68WBi6XY4iKupW/Z86gXhxO7RvWKxSK7CqimULpZ1K4upHClVTByicETYg5ZyNSuLm4urml3C9NKysr08yZM+9p/vf09Lxn25EjR+Ktt96qkQ1CKTk5OR3s7e33P8w+JXA0Gs0qkr8nWgF5oA3D/wsteg2s1eNLhNVY0nEERJTcawz6KAwvEWOlWNl7Y8LE6g0BnxVMv9y98MrEwn+d531qa11h/rFxvULfCxtV/57a2tOGsU+QCoVYOqIXFpNXVpbWCD6pidvydODVuXfgs0BsBbjWRa/+dpg3510EeBZCLLGBQKBCw7pBCHSLg60ol6Rri1t3bkNnFpOdOLD2D0Shgxy89DR817MPCq9ewVe/k0K+fn0Y9QbYCg1Y9v7XiLeXYUnduqj753Kkx56CokgJeZAvOEYBZFIFsgTJ6H/4DL4nx+rQug1iU1Iw4K9/2NEfOpEcA36ag9+nTYQ6V4XJ/y6B1koEXlYeDCYBtFwd7qTHY9kWDbxcJPjn4GVodDo0b+SNv+aOx
nI4rRHpKe5pJzloB0SIbntG5NYR1ykjTN4Sl9rIN2wj14fKx/hnhMmPICKHkMyva9jLqKFeCJ7ew+5dZyBZimMJtNji24IT8ZHIa1nCD5saFAWYeRTFObMS0KItavg5eGl8OHXzQZ6d2nIXffN/BU1pMyBEJOysMI4/8bMjIy8tiQkwRDj9e4QooDVtEgE57Ms+TVulFn84V+F9S50SFagnjNyVFX2HFbSOSGHUU2SIi8MCxTCa2MD75IZuszaPjaP61SfxMEAwG+QiHmMU4fvF4/hEL+Gediv5/GimU7UVVpDJ1PmT4I0THt+y8MssoIf5MlQ9d+A3+9PKW/PLjgWdTvsqI0/zj2G4/j+h278cIPSzCzswYuay3yjh8H08eGJgfNNOzmRpisDnJOQa7RQaOUhEJJsVpla2M9zFZXSIMtUaih1yia77UgALuxHvWksysUUriJUCpU66EneTB+NxrqG+BkzRhFUmi1EZAKqeby2VBvbCSLFQVefTXycnOgijYiEGzay+dxmGFstMIbAEQyNgqGCnyyMLHxsB3WRjSa2UBYXEgVGrJwK8i9UwhAFjWP3YIGkzXkfZXNQxuhhpBqSuh329DQYIKbLLJ8MRvJIiLktbXlWYelAY0WJ3kHD3I1YQ6UUnAlOoyaNgsOcSxkzRpDH6Gz0WiGh+RDiUg++qZ8lLEdMesGKTSphqYdioTONlMDTDZXaD+jXEnyVMtOoeVJFUDD/p8w946nUKsehJeffw5dYuWozVmDBfPH48Wosfhs8bvoomAXYgepS2PIlErA0lmvIxMKF0FyvaqGLMgCKcSEaXAGKOiUBoydfR186nQI4IexqgYWP7muFsNusYSYB5XWENrzZSz4DY/d9RTWWKPx0guPY3RfFerLinA8TwyjqREVZU74GQpqNr1MCL4yDqOuuhY+eSKkrKKI0NFlJ+1I2sBP2lFM2kqnUYbCXTmM1ai10tCQdvWTdnL4gpCpdNA204ShvTDW18Pu9oXCVSk1WmjkopNJFPSivqYWDlIvpYiCzelHRKQBZA6Bm7y3nvQfmnCcMhWhtUpOmFhS38oSUn4aJqe3mcw0zA11sDo8YHhCqCJ0UMvYidqHquJyuHkKRCo5MJntCPDEIdpSXjPpV+w5GyZW1/S+xiocP56HpNp6VJeVwBPgQBFhIP1GDI7fEwrD5iUUl5KFwkHaSW+IhIjjQaPRFKojlxJBazBALjp1yHOQ1GscbjC4kRwhQMBlQmmNBSKVBmLaGSo3JVWR8aEhfbtpL6zTYoTJ4iBtyY4dNQw6JSgykdobKlBn9UEdnYgIwmh6bKSs9TaI1DGIjhATBtMDE0tzUj6hgow3txm2gBhxcTp4zQ1osHqh0keG2sFnM6KqwQYxaTOdnEJDTRW8XCmpH3k/4S11UdEQBuxobDST+vrBE0gI7fQnFsqgH2ZjAyx2QndKADWhO9uHWAdmPa68Ba/0mUby8YPb+hxa28va2DSOGMJ8KgjTriHzDredcRTwOAhjb4SHMM5UXQ0K8nJBC3uF+mLovtcJIztuCCNLiWTkPRGQ8E9XPgR9bpKuIVQPNjSwWquDUiJonir8sDUSetvdIJUM9VMVoQ/jYUMEG8EIlYg2RIDPuFFTVQcnI0JctB7C0yasMC4E99xzz5u33Xbbf0UikefcqcP424AMMxEr+ZJxOyUzGgW7c8Hx8ZsiNjQrFHzcYKvSIRg4MS5btkC0bJ1gBeb4CDlZ68jcyhHCZDJCIJa3pm1hIukWQRdNLiBaQkuejCDOFy2hGtuWqS1CszFZm35t8OAaJRe5pUXoq1KTuZh7BkuKiwe7dgrlOnBFZM6nreQKv5mLDIS2QIjEJA1ZJwKBQOjwB3hNlgTkH+ukPtjkvCJkz8En8zLFp+Aj5aUDInAJr0YTPtBD1gRuUAmHQ47iWgW8Qj0+++Q+wgNREAlcZD2UhvgPiiNupjEntK2lqS0IXxLgNPvhYEJKkCZatVHwMM3/Bdk2Y8icLILFUY4ErgkUWY/LGvaR8vghCCsZwvh/CIlE4nr11Vcf6Nu37y42dGVRvRsdYySnpRuYpsCAVAUZQ0yr49xT4SAC1I5CG8wuwuc6aRyucGJkdkSg39UPPCIQCL2Xuy5/NwiEQldhlWf/vlUbOxgbrKG5ac71oyGTn/7xjk/mvnET+8NsIjyyWAiF/PQ2KCqswvYtR+F0ukPzaHxqUmlcasbuP6MulwoX7ZMhptMg9I3x4YlZk1Dw5OsY3ZaHpqvwFBE419TKMedft8NQvAyPfbgeqdOfws8P9cHbj96Fj3d7cdO9jyA7uB8ff/ITZL1uxAvP3Yb41jaxY92Hj2HBp5sh1qVgQL9BGD3zaiTUL8Fdj34KUfdxmDGiE46u+garD1px90c/Y4L3F0y/9x34o3rghukDUbn1VzjcNFQkN8ZVh8+ffxCv/JyDHuOuwqCYAJYv/gG1it547f3ncei5GXhtoxs3Pfc0Bghy8cpbP4HJnoEvX74dehm/tWrr3rwB897+HSPmP4u5Xf348JVPUCzqjg8+egCVP7yMh99dC0OPKzG5XxR2Lf8eu2tlmP/8e7i+Nxev3XsjFh0BRs+4GtHWg1j0yybEjHkUX94XhWceWICymElY+dGDqF7/IZ54fxUiB0zB9D5q/PTua9jrTsaLH32NDsVf4s7536DLbR/j+wVpeGb29fgiX4QHHrsfKfbdePPr1dANvgmvPzgXemk7Agdtx4bVK1FkFGDKzddhcPcUliWDXn8tVu2/mkwsHMIsBLH7y4dx43NLoek2CjNHZeLA8kXYmOPH7Bc+wkOZ5bj1lkeQ28BBVo/e6DFwDGYM4uLxW++HpfsdWL34Gnxwz434eF8Nuo6YiMk9dFjx7dfYZzTg35++i0F8VsnD2jHScDqI0BpQNZXNUYTvPvkGru4qrPruO+wx6fH8l//FWM4hPPPgw7ClX42PX3oAhz+7A2+vKsOAaXdjoKEULz3zDhwJ07Bk5X+Q9+F9uPXjfYjuOgLXTuiOvNXfYuk+C6567lM83cuGO+bfj+O87njgnutg37MQC1fkYMjtL+KR6/qjtes58klb3Ynv9tYhJrUz+vUfhlmzxmDj83PwyR4fxl17HbKkjVj46UKY46dj5ZI7TyJx3aEVuP+eR5DD6YDpU0YikL8Bi9ceQP957+HFu2KxYNJobHPKMG7OLeipqsP7734LE1eG/uPnYHgmg6/e+i/K0AXf//4TmtQfDmz5/nNkCEei4fdf8e36InS76Xl8PScWT82fh7XHnUjM7o5+A4diSichnv7Ph/DHDcU9s3tj+7fv4fsDftz52ju4Z0wntO0R6z98FI//UoEFpFPO9n+JcTe9g6A6A3PnToe4dAM+XPI7Yic8jJX/Houf3ngaH2yoxvBp16KLpATvv/cpOFlz8d77T+HYO/Nw75e5uOOrXXh4iBY5q17DnAe/R7e7FuKjyVZMnbUABT4DrrpxBvSNh/H5ojWwqIdj5epncPyDx/DYwlzc+MJ7WDClKwpWvoern/wBvW95Af+eaMDjN1+DTbUSpH
XtjYF9yRHPw2vvfQ1R9khcNyYda758F+uLJXj09Y8xLrYED906H7tdSbhq5ni4clbilw05GHbXh3iocxnu/vfnoDKGY+6VHbDh63exJl+AB1/7FFNTq/HobfOwqUGFyTNnQGM5gEU/bYK671x8+N7jSJW1iiSo3PIV5tzzIhrkWZg9fThs+9egzuxpiqgTdGP7wldx34uLybgZh4mDEnF41ffYXEbh/tc/xw1D4lvpbzn0M2bPexKNUcPxyLxJKPntS/y0rQ6T7n8Bc7rZcN/seTjKycTVs8aDqtiGr3/chqzZL+HNSTRuvP1J+Lpch69eeQiJ/sN48qa7sD7QA0s/fxmdEy7CLCOMVrDKhbCC4Z+NaJEQQxJ0WJ1jgojig9McqkHQRt738Zo1g6x03vzFmx3tLdsYGD+Nnh2SsXL3YdYWP/S1nVU+tGxPaAo5yTnhT4HXlEfLtgrWFLZJGXH+SoYzbZc4oQThQML1Y4OfwVAfKZPNicY6E3SxkUT0v1xOIBnwKRmEsljQ3gYI4CMF9RGaEkEkZPIrCFk5BNgNDQwfNNdDeAluyBUm64+BDU0Jrgg8rhIWbxA2K2C08VFr9KDS6EKdxQWfz0GY9VoiDPkg3rIZFCMguXEh5gVhpbw4Vh5ETR0FOxFhaixkjLI0bmXNOAhyqSblA+vKscX9BmkYmpRKTO6x0UU4PE4o4gTL53AIDeWyaqzY9Tpi9HHo2/saQvewgiGM/7/o0aPHvrffevOet557+K0RmQqegGpficBOP9RZrKLkIh6uG2Bgg+u0foiJ6zhg3bhp135+Ocr9T4BfkfTO9FkjJjbWm+RKpRRCkeCMafl8HvQG9Rnvp6TGoLK8AUePlLBzWjCm/6gFSqXSchmKfdlw0TMpTxqDG15+DqO/uB0PPjYbaxLi4G65yVGjc9dUbFl5DCu++BDioA1MwIfcnBIEJNNw3V33o/Tp17D8w6fwM+mlQlUG5kwfgyhROy/i6zHn0f/ggck9yXJlxXNTvkGlM4ioqmP4+ftcsPZ6Kcl6lOzejhX1O2DmxWPBg//GbaPT4OwhwfqDTQqU+tIDRKjZCq9fgfL9m/DTIXJRGgU134+yEhsm3/Uoyv0fYO27z2E5HYBUl4Vrpo44ydsqi95X3YfbKl/FyqXv4IEfaVCyKIyZOhF6hxEf/boa3JheeOzVlzE0ioNre6kx+tYXsHjVWvQS8LFqWyXip76AFx+bRQRaBgv+Q4esJqjqttZFfMSnZiJKvY7UaRU+yxGgwRyAkyzsZeWN6HASbeJw4wP3oOG1j7DoradD5onyuC6YOG441OIzfNEki7BUKiYdgAj4LhsInwIhpyl8V01hHhp8AsTH0fhm0XbYNP3w3xefxYBkDRrHZODakbdg1VeLcd1/+oSyMvSbhLeI8JmqoWAqW3b6u+SpuPGe+ZicHYmYul245bNClDf6kDgkGzoJaWyPAmkdshCpcDWll6Vg7l3zMK1zNOIbD+D3j/NQWGEH4k9kyeER+qRlQ7OhBLtXf4lcEQMnKb+zKhflppZvFhpMJMLm/HGdUBlVj6V7PkV5VSNkMwfh3rtvwXPvLsZ7zz4COsiFLmMQJo7oiva6Hi+6O154/W0M7aiD9+CnmLu5Cj6JFse2rgAbo0cYk4JoTiH2VbR5iAlg/5pvsKvYBrGhDptWLAldjkxKgqP6CGqsTcFWOfLuePCZfyEJNuQv/RbflcRj/vMPoZ8EsG39Fm9sLUZBPYNOTYTB0GvvxN03DQHnhj44ljkFe5atQfXsG0J3lem98e93PsKgeCF8DYfQIUaBnRW78RGhn7vBBNrpQ2FJ/UkLwhk6B1LGXI8H5s2EqDEaK1bsQWlJDQJcGTKz0yHdmIftK7/FEcoPk48Dpj4HNY2+s+Tnwf61a1Fi5mPKI4/jmRuHQ8DJwaE1W7HhbMU4BVHpI/DKx2+ii4qBrWgrkvUiHDi2FZ9W74O1xgGv3YPimipSxx+wsciOUY8/iSdv6UnagrQ1GRRsGDl3BR/JBgn25mzDZ7UHYK91wEc6TnF1JQqrf8CaHAuGLngfL8zvD7jroPTehlfWrcVvO+YidVTzHjnahh1btqLarcL1jzyLB6/qCM8hA3YcfRLsdzOvqRYrlvyCOhc5KzuEX+qPgA29G6cToaKoEK5B8ZA1D0tl1kg8cEcu3vxqDV5+cm/Isimx53iM7puJgkW3Y2uFCzK9EZuWLW6iQUoKgiXHUOHL+AOUC+NC0UYJ/+ILL7zwyF9dnjD+OIJEEB4UbcDBUgusQU6rJUNbAZ5z0qf/0wV0dgsCa7Iar9eiyuqA0+MFX0C1bpsIWR2cgRln33PCdwNz0vX20p7t/qlgTfq9jBc/eAS4maTPrywmTKsWaMdi6lKAVaUEGIpkHwmKkhO6kHmVYR1i8sHlE0mfzyXU5sPj48Ht5qHUJEFjowumRgfybH7Um70wE/pR3np46aZQnsqAG6l6HiiyMGXHqkIhOg1yCpGGWNRbCbdHiyAg67tSLgBfqCPvEMKSZERtIw/78mpQUloJxsuaE/tD7RjweiCWCeEmbcSWTyoVQkuY/eyUZPTv2xNdevZGXFI21LoECGUS0o4C1iiSdBR+s2KHjR7CPyctwgjjn4x5d9z5XqIS0nVfvfQCwwQvyvSxhZ+UKtSNV8x98DFWIL4UZfwnIjk1/TDPZN6t5zAjLkV+AwZ3InOpj9F07Pti936Df7wUef6ZuGCfDKdkgzF3vY/kpHcw7f734Gy5XLsJH36zHmWyTnjimlkw71mGw8eKQ2bu7to8vPPKi9hv1mLufQuQ5D2IN197Hx988DWysx9HB8XZ3qfEhOuuxZqCT+AIKNB/1CjILEewdnsNRk4Zg45lZny16i18/sGb0HhHonjt8tCWChV5Up/YDRPGDsHR73aDp0rDpBEdUH1wA3ICqejdiYOP7nwJayqVuOv+fyNDmIenH3obb773Pa7o9RjilS0aKSeWv/08Fm6sx4T5D2BsShDvP/E0viQC3vARL2HC5LHY+tZqPH//AzjWLxq/L/8BZi4RHseNQnZXChOGLMLC397Bw88ZEe08hh+XbULMqAfw6T0xJ6robcS6X77Bztx6DLjqZoyM8+HbL4pQ5T3Z2RQLpuEgXnvxbexxJWD+Qw8h1rIDT7z2Nf77eQoGpN0JylEBi1eCxNQYtO6oIgLj2Ov/hf0HivD1ewtQfmgtSatGwZ6N2Hy4Gj1nPoWPnr0K180egk3P/oSHFzyBa65Ix94VPyAfkZg7dyb5v/TCuwwLoQQq1iw8vwDfff8dJFcPPe9H/e56/PLFVzhaIcWse2cjybYH7x89Evr0FGTOZi3KoHr/Krz6xhdwxY3A/Q+Og3nnt3jnq5/wyQ+DkX7fGJyt6/GzJuHGoV/io+0NECWNwZie8Ti2fikqDaMxMgn4tCUhh4fuY2aj/5oc7GwMIrHvKHRV2vHbmk3I7j8OyeoLmYMd2PTVu3idzoFp/3IcDIjRZ+IYRJ9mSutDzopvsWxXMeKHzMSskXFY/+2XyCPMaPAiRrmnvhjfLvoRp
TYJZl41G5GmbfiyuAAepqlPRiUlQco9ipUf/gfS43HYuWQ9moL5itBj3FhkfrUeS956BjxTDiJMB7HX5GL1QGAF8KTYSAgD27Him88grUzCvlWr4fSdobBBO7Yt/QHrD1QiffQ1mNFDgeXffIHSxqZyJA6ZhRHpW7H9s2fwnG8auPlr8N2aIxg870WM5azDur3lSL7iaszorcGqbz5HSQMTCoOWMHQ2ruywFb999hgesE+H1nQA3/96jLTdXIwc0EbDRSkxYNhQxP6yEz9+9gaieBNg2vEzGsye0BwjVBswbuoUbMj7Fn6eCsPGDge37jA2Hrejz8BerQoG1sS4ZMtivPbhL+B2HofHZwxB0epP8eHS77CwWz/cNfEGDP55P3Y3+pDaczI6af2k/6xH8mjSl2I9iJSLsWP3Grz7IRlHdXuwp8IEXCo3IGGE8f8EDEeACCqA0dnRWHKkCgy36YNB23X0JCUD5/S5OYAm54FdM9NRu3M3xBIRaNpHBGIBEUp5obyCnPadLrZcY+cYhgm2XjuTEuF8/SmwWwDYL/XsdHLQ7cAepQpd7A6UVpcjNSHpvPL4o+CwGx74FCTaodh1pAjeABcmkwkNDT5YHTw4vDwIaTP8ATdcIWcMRPj3MqGIElwv+3QAURoVG6eNXPOGLEwbPXLIEYPMDpmQJSZCKpWic0Y69Eo19LFJkKgl7E6xEF1cLjrUVqwjzobKcnSsrECDsQF5hYXIyclBUmIybrn+TrjcVoyfMBoSqRhMIAA6QNomZMUQIAIRlxynKGHaRCMNI4z/FYyeNe9Vu82s3vXLfx9imHYc0/xBdBx21SfZnbvuvxRl+6dCKBS6yy2ixfEcnKRkYLc7+Hx+BAIt1m1cCATUOSMUURQPfSeM/V6ROuSJy1fqy4cLtmQwdL6CMMNbEOTLoGUtc7kCpE+8H7sGzUaj3QehXAspWRxW7x8Cs8kCIuNDNGY0rr3XBi8RdeUxOvz7s9V40GyC3eUhS086vh5zCzSqU/0IKDCaCP49r/NBodG2Rp/uMvkhbJlwD0yNZrhCe5kn45aHVZAToZWTdgd27JuLRvJeH8PD6KFDcd1DRNwRK6FQKzD32S9x9SN2mC2kLGQR5E+ZCXWECiKKi2e+XYMHbRayYLoQYNLw/ZbZUCnloE7qB1LMfmkRpjltsFjZvekMXl2yCWqVssknw7zXMeLGZ8nia4XHz2DqzBugIQtri7+Gx7/YiPtsZpjtrP+E6bjl0TegkonBDbjwyZJV8FNSRGnVmPvvRZhynwkWp4cssiKMnXYdnG4vpCotZLz7sG7rrRAodOCoxHjzx7VwWEywOokgyUnD0gl3Qa2UgbFV4KWn5mKFeQwW//wU2qgxwFOm48kvVuEhl53Uww4v6/fhhtvJ+q+CREiF2K4e176A/TOegIXQ0u0PYNKU2VBr1BDzCWPlS8FnP6wifUAKvaqpZVQxI/H1RtIvhAqoIcP8d7/GdX4OIvSsQxMeBt39Drbd4Idcq2ddSuOxhctxW0NjyPmjMkKDtDe/xuw26Qfc+Ra2zfVBHqGHkkrGJ6SebJ/TEfo8v3RfU/8hjIpAfAUmX38PnISZUSsp8O58mzxHQ6HVhcoVOWw+tm29DkKFFgaVBAvXjCf9kvQdrw/IfBJT//U6lDLRycoJWQYefGcR7mR9TWib/L5xBVos+HIz7nbZYLLYCTPFYNrkmVCp5BDygpj73EeY5AXUWi3koiR8tX4sbJZG2EPtwsNVNz6ACKWE9W+NN1dtgYcI4E1yoQwPfbsFd9J86Jv3a9z4+kpM8xA5PIqC4OonsHXsQ5CTNvU7SVuNn4D7X48g/YaUOeDF8x/9EvLJoNWyijAOulz/Ig5PfRBm1kcHV4ix46bCSvobX6Y5zU/HlCcWYuj9NJR6EWH2bsS6DdNASTUIuTeIGIJv1v4GmtBcY1Djte+34jGjES5fEELJKMy49THSLzjQaCnIrn8Le6c+HXonyxVeO2suXDYn+Ao9RCoB3vv8U5RYBKR9hPDV8rH81y2gtRFQ8sToeMN/sHvq/eRZNxhKiLmzryM0I9SRq0PxmV/+fiN8XCl0rAaIq8CV972NITeZQuk5fBGunHQ1HKH0GkSopPh4zX5UHtuC5x+4HznK4Xhn6fsYnKEmfOSVGH69OeQHBuQ94ybODO13E5L3RBDavrtqT9M4cngI3abi1idVTWPzJJpxEDNgDtbumBbyCeENcCAbMQTX3OtEQCAj/UuO2NlPYceMh2A2s32MBpeagnkqNeTitl/IeEgecSOWDp4VGl9suk53v4K5T6qb5jGS4ptNB+Gwmkl53CGT46tumA+1XBLa6/flqm0hnxtemgORbA5ufcBF+pMQkfq/g7vd/x9oo4QP4x8KdhzRREDtGSHHfrkYhXYyZ5zy4U6AE34Y/O1saWgKNhAEx+dAZkIUjlfWESGaIYwiJ7QlIqQ0IH9DAixr9RA8OY+m7RLndsjY1g/DuaJLhPJtfg3DFWOVyY2uehWKysugM0RCJhCSub7Vu+IlQlMdhoy+BX1GXAcel9TZR3glwjsw3CYegPH5mkM5MPC6HSEnigzrl4F1puPywSHiwssXkrnRSeZhJ1YsXwaj2Qh9nBYKMUWS0Fh7JA92lg7KQ3CRNdynUYKnU6KUS8EeCID1rmFLSQMdmQaxTof6Dz4Ev6IW5fZG/PbZa+BRFKhfvkZQYcDwKdNxRc8seFhlA03DSPtgCTQ5UbNxaLiZQKi9Grkywi82taefCAOsE0o2FgWPpPdxmpRI4kCTv4kgOTSEDvGEf5gQGYtZyekh3jRI+kPY5W8Y/xSwFgcz5j3+SEJm912LX7ljcYCmL9iEp/fwid9MuenBpy9h8f6xkOiSfoHdcnfQbc3ase0ojh4ublUusGCnZKFQEApLyc43hkgN+g/siKhozWlzfiNX/4kqpf+8f6p1yAX7ZKCEUkTGns7MStWR5DhxziXpdFFt0hFB/AR4IUeQ8rP67eaS/HQn5dkCDk9IhNHIpn3Qp5ZPRJj9aNmJ1ypOzoB13GQQn75vmUMWSpkqInScFax3ZZkKkeRoD5RQBn2UrN177LMSZUToOPkhCSJPcsByOn1O/BQhLv7EGaull2v0p9HSVF8AV/Q0fPjcbScpGNoUhgiLChgkZ/5+T7HO8SJPd0rCIdejTnEYwyV1iIo/8dVXbYhCW8pL1AbEt+0ffDH00bEnLoijQl+DW0/V+jbpBYQ+bfoSEZ7Vurb5q0/0BeHJ76EkasS3ucA6e9SwTgbbq3Br/gJEGKLb6V8szZSIkihPfQBKXSROusrhQUHqoDit//Khj23zdZw8q4mOP6k8Sn3sibz4WsS3NJHilLYiddFFxp58jZRRrIgIHa35nWHrl1wbjRMjQYm4uDY14EkQHdemjXlnokkTpKRPS9v0a3XLOGJo0G4bdi1bhl151eBrYnDdK19jyrDuUDc7o5QotWhLUlUbYuii29KKBZkXSHppm/TqtsQj40EblYKsHr2g1fdHkkHWLEaQsUdoImlDl5OeI2NORuYb2Zm3yZ1I
Sfq/PqoNbU4hMOvEMUIfdUZateZDxkCEQdx+Og4v5Fi0vWmGxz/l/erzKHQYYfyvgXVcRtZchgiJ4zIj8emBYri8rG8GutXkra1SoC2T19baoSXEWJrBgJryWsJjSGHzuCESNdsHNvmUPg2s0qEpH+ac8v4f3S7R4hOCQ/42+LnYQcnQ02tHTl4OenXqellCWrbkJ+QL2bgQ4IgFZPLmhKI2cFsUI6xSgQ5AEGEgZeOFtiPQXD/e3HYM3y/+ASXlFRBqIyCMjACvSxqUcWNRJFdA7vKE1iJKJgEll5F1VgwTbYFLwIWfzIUCDuH5CLPOIfnLAj4EKTaCRSU0t01A8NbxoLwceIJ0K024fj8KSLnySkrgCzQx9ByG3Zza1N6BYLDZwoTtA7XNSh5WedP0PGvU4g34Q6E7A6Radk5zGwYD5D0MTAyFA7U1OGqsx0s9BoAJ+fYI+3MI45+FPsOuXNqlX67i/Qemb67IP9j7jzwrVamNY+e9eHP/oVcu43A4F2Er+/8HWq2uzsztNrb64KLdhw8WGiK0Soyb2C9kveByemC3s86+rTh2pAQOhxt1tSb8/OOWkDPIHr3S0a1HGmFhBc4aQeb1hvj0Jf9UBQOLP282DPrRUFWGercACYknoidcCnhtdSgsNyEiNhEGlfhyOVa+cDB+VObnkrpzEZ+aBq3sVEcgDIwlOSg304hKyyTCawBlRaVwCyOQEqcLRb24UGjShuP554ZfXPnDCONiwaGQ0P0KPECOPwsibQLu+s9Hf9r7wvj/h7BPhn8+uM1CMWtmnyAVoYNejV0VDRBw2w/Xe2J7w+n+E0LXiLCanZmBjUfymywfiCDLhrq8VPgjSoETSgQGXh6wvKYeUZmRCBpLYLXZoFGqLlm5TkUb95ghP1B1Pi/yrPXINTtR6bCj0m5CidMFq8sPu9uPgIwPjp/QK7sTRD26gGajQPE44PMouOwO0ERYZyM0Sbh8UAEGam0EZBFq+Mg1B3mBlQj1NNMUgjIYUg6csPpoURKFtq00/2YVES2hftjrrAVC029/yEah9bmWPOhA6++TFE3Nf1nxiRNoCX8ZDG1XEXEpSDhcLN66Ho936w0ZQ122yB5hhPFHQOYl/qG8/d0/3P/Kg2tzlk7yMydC6CjFSnu2oWNuoiLzaEd5t6P9Ow9dw3UH+JUlR7ufT948ivLHpXfZPfSaBx/t0nvAlstXi38u1BpduXjMzcmHjlXtNlZVZH/56epzPsOGqfx9Zw6cPO3igVffPCuK2+qR+B+LS+ST4dxgfFbsXr0QKyo0uPOOW5Ad2Z6LvQtBAA05W/Hhl7swYPY8XNU/5SxhGy8fHBXH8NT8OdjN9MWrn76PPrrmG7QdG795B1/vsGHuAw9DI23P02gAa9+4Gw8vNeHe73/FPR08WPrNh6jQDsUjt0yERhw2wAsjjDDCCCOMiwHrw2Bkkh4ltXaYWXmxmVdosVIICe0tAisRNAPtKPhphgONjIsoJReWgABetwc8JtC8LaHJYiEY5J4hhOUFlvsM+bRGmiDllrLbFYgUvsslQD+5ASXFxVB27Ur4oQsrg48I9vW0H9VOJ2rdbuTWNyDHZESJ0wGzi91y4EeA3QZCaOD3e0PEZcvhY7cf+PwIepq3GVAcBHmEFnZCG3KN3U7BSPngslZuQhGyAx4IVq6DkseHbsBAVMbocaQmB546L8YOHoFDhWXwWkxw+l1wsAoC1g8FITef9p1QEPjoVuUDL9hksRAkf8mNpsoEgk1KBNaSgdWIBJt4dw67bYJpUVAEW7ngYPAEb8+l+KHtNELW7SVpYx8hp59VWpB+wIadYUM5O8l7io2N6BxtCCsZwvhLwPpUOJ5zqOt7W/791JqKNaNcAVf7WlQCq9sq31G6vdcObO8VurAdr4X+dmpSxrIOWfkMFzE8FZSNQBd958NTpz51f3pqlz0SscRJURT959Tqnw2RWOya+sDTvdZ88s43lccOTj1bWtbKoUNWAvoMyKYP2/S/cv8fKBhYXLAlw7FN72Lera+iVqhHYnwkfKYq1JhpTH/yUzx+dVe8N6ML3t5DQ6URs4bNuO+Nd1G0dQN+K9Zi/LTZcGx8GDc+/Au4hgTolXyYayphpeXIzjLAYXbDYaqByUNhzguL8fjUDFTvXoyHHn0ZR80U1AoKVqMVur5z8Okbd6Jm/w6sXf0bqG6jMURTjQdvuR7b60SQyQQwpA/Gs8/egdXPzccP++tIedTwO8wIKtJw1zMvY+6Qk5USlrI9+M8jD2HVMRtJK4fbREZYzAC8/dGLyLJtwk13PI9irwQxEWLUV1VBnn4lXn32Kix69jmsy7OQRXArHrn5bjz19gNw/voGnv50AyhVNCSMGXdMXoux97+NF27qg31fPol/vfwznEI1qb8AxpoqoNnlIMeaj3Vrf0NptAx3zx4Nf8Fa3HPfCzhUH4RGKYLdbIYqYxReeeM59IqXhNe0MMIII4zLgLBPhvPH0H8tduo1UuOr8waNjtfLc//q8rQHVthVEQFyQKIGa1hPsc2rp/8P+DzjEmHaTwfRLysTy7btI0K0sPXLOhvWkhX8WdN6pvngouULOpF1m9nGlrCX513uNts52nuOLT2HDobedKCgEN1H9UZp4X5kmI2QanTnbR3hCLhJ3bi4Y/sWbK82QkAq4OJziIBOwx3wwxv0I+hnwPjIm4jQzuGzYTx9oNxuUH43kbgDuMIQA42Aj4M5R1DuIMI4ocm8/t2QJvDCntwLOzUarP/kZfi69waX40JifgP69O8R2hphqSqGICESxQNGwL9xJSpq6yCNi0FhZSN8Lg8orgfCICmPP0iEfQ/4LjpEa5piwPEGQAWC8AZc4JD7HH8gZM3A+mdgFQtBjwdcrw8SmZiUmxe6z0Zc8vDsCPi5YFirBw6PvEMApcsGtYSLeK0c8SIXpAE7JB4Lkp1BuDxCbCb57VEko15qgJhL+FU/jePllciO1hGmOvxR6FJg6pPLK7unG7Y+dHWP6/gU92zhq/4nQdM0VZyfk71z5cJbSvavm2w11oV2RO9JNsGluvCoy0zIzofdHhREUdDI7kLGfv+Gzp9/v2Fd23QyodyRqe149Pru89+d2v/qb8NbJdqHSCRyT5q/YFptTU1szs4tc/P27x4nFQkEUUqqQ4RGIo2M0iAiQgGBsMUajsOlab/wrJn+g3DBPhlalqzOo+bjo9fmQt14FI/efQd+/OAVDO70XvNdDZ75aROmJBN6mXPwVDv59JnzIj65qx+OrH8Vt9zyLoSZT+Db/0xE/Z6FuP2Gx7BrxRrUjOfg8TufwrYGL2RKBaxm1qzEjby17+GNn0bi6nbyjc4aiw++fAWdZEZ88eQ9+GafB3e8+C3umdgRXN9x3Djkajz9wNPoue5DdFK37PN346sn7sQiNnKAXEGEeQthCvywH1uFF94Yhrem06F9fR4PDU1CZ4ydfjv69euNrLRIPPHCs3DcNA27mF547qO30V9Ho2H8dbgbkdh7rBT1JiGMtQexddFCHO1mxlPvLoMnfTpWf/oE4jQ8LLxrNB5ZZj6
tHu66Erz1xBM44k7Fa8t+wNhEPhq3fY5J81/Ew68kYOELdyNKFlYzhBFGGGGE8dfB6fZJ4hLS4l/46XhOfmEVNHJR+avzBo5JMChy/uqytYDxERZaGMDA+AjsqrHDGvoCfvawkqflwTCh7RHs3/SUZBRW18PucEAmlzfbMTDN/BGDy2wg2oq25aZ5fOwoKscErQGHikvRS6ECn+Kflq49iDhcvHp0N34rq0OqxwWZyQ2eVovjQg7sNitGUTL0DVpwhAjn+2khIo2VuLl3V/hsJjT0nYQKlxN9jVWYPmokKoYMhCUQQE6dCUxxObxeBlN6pOFWTQT2yOZj7O4jkI8aAXHDz8jIykJ1WQmS9TEwCyRwOGxQEl7PUlMHbUQkEaj84PN4cNNc8LxmXNHowz2D+kCvUqHAXI9/7ytCIccHCxtAvWXLA9tGRO6JSU5Gz06dYCooRuH2vSAVAZcTAFxeRIkYzO/qxrBuJkjF5aBCPiUksDuTUFTogKmxEnypk+TDhdsVhN8GSBg/rhFycTNVSerEweY6A5aIOuNAbTmuYrpetjb+X0NFrTUmKTn26hve2HJ1aVldcEyfpB8emtXzf1rhUFKY12Hb0k//dXzrr7M8Lnu7jtT6l2iwMc2IBtkJMgl4Al+P2J6HDlXt6+GivZfEvMrhtcv2Vu3syx5Lj3x+42c3rxz9d7Zw8Hm9osqS3GE+r0eWkNZxg9/vFyscue/XeMU/RCV3+eZyvDMYDHJ/2pk7/ddieuKxalvHQDCa5xdN5AcZhhdscFP+miBwrMkaKqSEZnd4BVkPwXjHt3z1O6ybG38wyGcN7ER8rmda19jFA1Xm7f07pWxNjzXkUxTvb0vvFly0TwZrfTlKympQnbcXx4vroIrshghdi9DODXlfPjtOxKwOrX+cQOv1VutDQSQ6p8ixqYGL8fe/ibtGZsJbV4A95X6MHdsBeZ+2k2uLJ2dKThiBWAjpQ9i3bzdKe2jgPf4b8mxeaDplQC9ou31BhKSMZAi2NKDbpAV47vaREHuqsWN/BfoO74/S9QuR3HsC5s28Af2SaHz9/IOY8erruPulT3B9J1JeLhd+txN1NfWoMB/DI9ffhHzlWLz+yXPQlW/Bo/ccRgVbNl0SUlkv11UlOF5ZH3LuV2Vuf94UyKRITE6Ev7gG+7btRWd+FPbu3Aubm48+cYmQCsMKhjDCCCOMy4GwT4Y/DolYiK6dktmf8S8tyTmeX1jNqGWCylfnDRqTEKk4/leWjSfggMeIQg4FpmTp8cX+UnB4kpCfhdY0zVsnQs7+mPb9bbVYFnSM06OixghGJoE/4AMlEIaiElCsc0GGG9qeEWROfNlmrSBYtLVkaOtboO3vpogUnJPedypa8jjJMoLDR05RFa6M7wKrtRFldTVIiYlv9U1xNnAIl2v1+BBlNKFr4RGkZCWS5yRgrA6MVBsQI/BBJM6E/8BBjBk/BIVHj4AO8LChtBRTDKU4evQY9gfd6JnVAanxSdh8eC825R+Bb+duiGQK5JsaMfOm27G6uhTcoBd1ZaXI1Snx7W9bYWBoOBVy7N66CS74wWgjwPBKENO7O8RcGkGvF7C7MIXi4o250/HJt4uwdu06TJkyGd+NHoaXisvwRd4RBIVi0DwGKT16IVWngWP/Uez+7HMIGCf8R3JhOpqDZDkH3388Eyr5XgQ4NQjyRLC4xeAEdbAYJeBxteCTbuIJFuPwMQe0ESNQb5TjwPF8uN1uaFQKKEQeiAV1kMsL8IqiDJUVJniZKWG3j5cYBp2KPbjmAK6+8c2tV5eU1gZG9U768eFrelwnoHjev7p8lxMOh12+4rNXnzm48YcbPU77qZ7G2wWfx/fdKBi7090tdd+EPnO/yIjJzOXz+f6nFt7+686yHZclUOu60vXDr3izy5Fvrl1zRUxkbOWlypdVqqz95o2neo2Z/d9ufQZustusSolU5vgjyoxAIMDz1x9/QmTNfzyZzChgbQTKKyDm8FiNJBXF4Q4vOI66tKwu686Z2XmCVS58sPrgHfeuqX2bYc5mbH6mW8HT/Al4/EHRwj3lcxYCc7Du4IkcOGCkAp4rUsY3fzFO/JVcIFobGRNXoFJrav8ODiMv2CdDS2KK8WDrD+/gaJ4dvW5+G/PnjISKyO2lXQdgiEQGvai5T/OlSO3UG4PIgqKT8yGPysTAIUPQIaFp3MgiEtF38BBEpBlCDoUkyij0HDAEnPQEIvorcN+32zA7fye+/m4Znt68EOrETrhy6hRIKR6UcZkYNNiNzFgthFI+OvcZDL0iHbLQbC9C/5texI6x12H5oh/w8lOPg6OMx0NfrMSY7skQnqQE4WDCo99i6I05WPQNm3YjRJpkDBk/BVpVBOJm3Al97Er88OPr+MHogjZ+ML5ZMhu90nSs6yM89tq70Hz6E3798F1EPnYf3vn2O3z+xSJ89vTjyOzSD33HXgmtUw8/NxNvLVuNDb98g2Xv/RtrY3qiz5ArMIRbj0QV6VtiHbr3HYiYiDRIVTGY/9rPmHxsMxZ+/wOe2OCCLqk7Pl/1HLolqXERPiHDCCOMMMII47JBLBaiS6ckdpWKe+mX3GOswkEl5Ve+cvugsUlRimN/ZdnSZAJkqZU4QNZyin9CEdDqOJBwbzzuCV8N7Qn6Pr8HnVKisL+wCh6fDxQnAEpwsrn8yYqF9t/TVlnQnrVB2y0S59pmwX6mEYkkWH7chJtTtSiuqkesPgpiPhWKbHU2cHh8aIjAHRERAZ2sM1wks+Ld+yDLzMJ75cW4JSUJQnsjrrnpehyurkJ9eS22VBajV3ISLDwabrUIWw8cRq/jeaiuM+G+Nz9GbkkBZPER6J2Ugn15hXj7qskIiGTgDe2LQGUZjvlsKHM1Ql5vhp08I4ogQmV6MkyMDx6jkQgDNOHZxOA43fBLhbD5aIikOgwaPgKR1bUwVpfD6XShs1YFWQENh1aPDh0HoCvJd/+KFeDUlMOxcyci6wvw6E2J6PjgAPgDAUhFv4N2G0HT3FD4Tb5QgYp6FTSEdrQvH0WFPpTV6nHFpPfQoUMHHM/JgUi7C0ePHiVt7YVIH4toeWckimmU1W+BQLwJm766FaPnvkbaUEn6Difsn+ESQ69VsgfPGsTMm9/cNrO4tC5wRa+Enx65pud1Aj7vwvcI/E2x+MP//Ofw2q/nny0NXyhy65M77ek+eu57fQaPXCWVye2npimvKklfV7Wmz+UrKZBrzMkc93HvvZ9M/2Vizw69d19sfof3/d7/hxduXsZabJQe3Dz26zaWG5l9R/18w2MfzBIIhedUMlWX5IyJo/NON6RnAk0+K5ggP01Q9p210TBNGRG5+WLL3Wi2RFz/1YGvVhe5r7zYvM4HrBLD4Q1IUww+aQ+J6VGKi0fRUAQ0NCfgkAmUL6mASJafW+neq47J+CnCEHPkz7I6uWCla8vcKTWkY+6CuaeFApz++EeY3vaCLB5zFryCOS3nybfjv0Nvb72d2HUa3vhiWuu5JH0Ynv9w2InnuXxEZg7GgmcGn1aWzPG34pPxJ84XvPb5aaVVRWfh2vuexr
XnrBgX8qhs3LLgGdxy2k0uMgdNwpPkOB0UojqOxLNvjGxzrQ/ue7btuJ7b5rcWo6+9hxwnrsxofWEEnnmz50m5x3YagYfIEUYYYYQRxp+DsE+GSwexSIAuHRNDCodXluYeLSiqZhRiquqVeYPGJkcpj/7Z5WGF9SvSDcizlKFdO0KmJYJCKCLjGcCBXhMBobAODE8Cv89/UlSKs+FShJZsyaNtXtxgIGTeX2x2o5IXAxljQmVNFVLjE88p87LezfUKGRo8LuwsPI7pSR3RmBqLHLMRNQ4/3jVYEOVw4e0HHoa5uDwUVjLYtzOOllTCsXU7hKwzRD8HN61cRSSbBghcLog0OnhVEdjic8NPEZYzuyM4fAEb7xwUzUe2RAut3AKX2YmASgaBRAzVgH44qJbCuj8PDpsDEpEIFtIYrK+F3zh8TP36I9zZNRujH1+AXUXlWGO24MPSY7CT9xi69EE67cS+Zb+AcVfBsmI9Hh0ZxO33joMzGIWSaicE/KOwu+sAHwdchnXpKIDZwkdUZCySZGaU1HBRVsfDnLs+gEapw7YtW/HpZ5/BRweRn5sHi9MJpVgOfZQWqZ0zcPvYOag98iP2HNkCatEbGHH1UwjFvwz7Z7hs0GmV7MGzMZhx89vbZ7AWDiN7Jvz08DU9rxfy2a/U/3x07DVkXXtKBl18xtGR1z36QO9BI9acTz4bDq6cXGYu15075flhVsepO3/MWd7DH/Cd5MnebK03PPvBjBXP3PD9hG7d+uz6o/myzivLivIzV33y1Fv5+7e1hiIL0PaTwvfk7lo75Zf/PvvajLufP6sChkVsSvZKq0k+XGnL/Qhee2q7iWhfhLJh20a4o9bWUUnzDJHRJX+07CxW7s65cu5PFd9a3fR5WZ1cKrDfyhd084NqT4fMMBR8ziT2yFRgNOx7HoPvWJnbMKSrWCK1XO6yXbBPhqyh87El/5ztG0YYYYQRRhhhhNEKVuHQOTukcIh9bWneEVbhIBPxql6ZN/jKlGjlkT+jDGyUCL2YQv8EOTaVO067H9q92awvOJPagHWQxiFCcJ+kJGw8lhOyVGBDJba1TGgPZ7JY+KNoz6oh5FeAiM0MfPjuUCnm9klGQXku4qKiIRKcw58Yz4HORDCuYENSCiNhKquAsbAaddVloK4ciXoi4NexnhYjdeCKRWAoATgCObg6OVK7dINILoHL6wVFSsDrHIRQzkHAE0SZwwFPZU2TUkJCnlMpwChIWfhBuDg03E4zhHVGeHg0uA4P6LpG8KRCREiksDQ2QqfRwlheDa9IDC8ngA2k3TZWVENSVBNy4mhnrPAFuBAmx0JAu8CtqYLA04DG3aW4sasLc2f2RJDmgkv7EKsww+e3weHgwOH2NYW2JM8a7RakJgjBeN3kHoW01B5QqSIh4HJgrK/H448+grzD+7FskQOmgAcujxvjhvZHbIQOP29vwA19B+J4fSOKd6/GwAn3gKdQ4RI0cRjnAV2Egj1CCodb394+o7i0NjCiR/xPj8zu9Y9WOKR07LkjIbPb72W5B06yQgi63eLufQdvON98vjj25oPB81R+nguddOmmF2d8O3TYweXjFyy74UtZo1+WWSdDhJMPbsiBLq39/um564IPfjCtx4Dha8+WF+u8sqTgeMcdy7+eV7zvtwl2c0Mkq2g42zNimcKS3f2KX0Zec8+z51Ne1iGlMiJhIyIS0txut9xeVzRHT5c/CtoVc2pSOGpGG1BTDIfAXOZSvK9O7Pa6QqE0nesdXq9POO+bff9deNgyh9D5smxJacHSuWnjE1WCkjyjL3NHXlX/PY3C3ikaTtCgtNVBxMmA3xODgDcCZ7Oj8roSPLVHF4iT+zx2OcvKIrx9LIwwwggjjDDaQdgnw+WHSCRAp2aFw+u/5h8uKKoOSgXcmlduH3Rlaqzq8OV6L5/bxP6MTojEMWMlLI4g3Fwn60nqvPOgSGp296xSwEecVoUqlxt+Gw0pEaBDXtqbw0mwyoAWXw9nQ1ufDOejiGC3XLB5s+naWlCwv9nrVncAm21+jBSocaykEJ3SsyBg82STNmftD9Lg+01wFjwNyvkLRL7+SBZPQWSsAVKTEbLEGEjjIxGMTUBZwARP0AuOUoYgRUFA8RGflAAIBLDXViO23g9hcRlSomOxnQjkvrgEcAd2h46vBkciQEF+PigRBZrihehBB0kd/KQtog0QHi+El8dASAR+n7EBER3SYBIL4CivhqpLJ/g5XHBJWVnFjhd8IOCHhyUpucbx2ELRIljLBInfh4a8UvAFUvjqCjFqdjQ0sSryLiUOWCPBtxQj6KTBeIOwe5wIMCSvIBcNJj8CNCcUmlJCysQhZQypaoKBUD3ZMJZuqwWxOh3M7BYQsRCHjxzA4GtuxpayaogVWZCKeKiproOrugp8QiMB+O22WxiXD9oIBXvw7MCM297ZMaOopIYe3j3ul0dm97pOJKBcf3X5/gg0EVpjUp/x352qZLBaamM3r1s+ecSVUxafK4/lG3+cW2AuOdXY/ILA4/KY2dn/elkgEPgm9J66ZEjmiN+++c/tPxQUbR3VNp3X7ZR988zcNb67Xp/Td/xVC1siT7C+CirLitK2//Lpv45tXzHDaTNHnPOdFOWPz+yxs//UeS927Tt03cWY+YvFYrs4seP7QMf3WWVGef6hqXFU9StcvysRbYVy2qdOEBgfQ/W6x9AgLa1gYu+KSuywur13s/l4LGXT/t2lZo7fRXGXFFNskJ3LBjHFdWelJhzPSsXxKX3TlpwtLU37BQ01ldn6QMm7PI+pf9t7al/lQ/XVUYv10fEHL19pL8InQxhhhBFGGGGE8cfR69aFzOD+2a3nW3Ycw56PZnMu9P7fAZeiPCIhH52yEtgvQTFvrig4VFBcHRRTnNqXbx80Pj1OfeDiS3k6KC4HY+Kk+Cq/kcitQiKcn52HPVWYZwV8n8+HjrHxsBw5BicRNH1E0GXDWTYpClhJmIPz+ZDYVqlwpt9n8snQdutEy8FaGhw5UIrE4alQF5fAbDDDoNIgSHh+hgjpbGx2bv3P8JTeB6HPBiM1HN99kQt1tzqYPD7UrV4Bu9cNJ1+MyMxUyNwB+FjfDsIgpBIR9Npo8Fwe1C39Hq7f96LXFSNQVVYFJZ+Dw5s2ImXYAIwriYMjUo/NKhW6j5iAEmMp3A4bPCRfoUAIl88PoUQOjlQCnsNEGiQAb2kFRH3Yd/HQ6LEhltxn09KBYNPeFTrQZGbCHuwfL7tFhA2r6YWblEdO3s/lcUCJ+PDaadDuepLMA2t1FPaX85Dsd4Ddwu+l/WCCAbDu3L1uEaxWBwwaMRRyE4r3bYPXb4eAJ0FSeiqqjPWkPCJUmImcypXB6/OSsgfxyZKfwJFpYAta4faL4GfDmbLbQf7Eb3i9b/sWp84Xu/97zQXfv4S4YLniUpQnQiNnD8oBTL/93Z3Ti0tr6CFd45Y+OrvXdWIh5bzoF/wJ6D9qynf7fv3wYbupIbLlGu33CfM2Lbp92JhJP53NuZ/b7Ravrfnx6iATvCQ2NTqRzjS5/6yPWs4VCqX1ln9/N
f6bNx57+8hv3912qhXC4nfv/6ooZ8dIBtxAzq7VU1nnleeyVBBJ5daE7iN/GTbt1jeS0zKPs04rL0XZTwWr+EjI6PoT0PUnp9Mp9zUW3qb2lD6BoP/kqB1+Z2Ic8n5FYX4gKNFvbxQk3KOLjA+tRVaLRSu1HV6kdNWPUMqAr6/w4UvGh1wzF4/s5GNLrcB1Y5/Yjz7eWXmb29/sA+Ii4fb5zjsfiuL7ouKSDlgtqqnKuk2lZNJv41CS4en9+Z/a7ephcrncdinK1m4ZLlfGYYQRRhh/CtivfyET4TDCuLQI+2T46yAU8tGxQ0jhEP32qsL9hcU1QSEPtS/dNnBCZrxm/6V7E4M0gxQZVW4csNnBP4+Z5FRFAws+KW9CTAyKjI1wuFwhJQN7BJv5+/O1Vm7PeuFMCoe2ZWjPeaQfXoiDfBwvcSBVIEFFRQW0ciV4nCB8PB+Yhq3w5N8Hid+JgCASK3e6IBWZIPY0gJbEQKTRwl5XCyVPCJE/CK/ZCq5BSSjkh04XCYHbD9/mjbBt3Q7K7YPFaAJN3t3YYITOYEDlsRxsT+kOyf7f8dHrb+Cx/FzoGQnqYQePS8Fpd8HJOsZUSBEQCyCnm/hnvt0JlVSOcq8DrqJyyHl88IngzuH5wRDBnsM0KxhCGgYgIOSC56MRtFjR4HUhM5G0Q10JeDEZ+HXdJgztIYBAJEC3iP3YkhuHGnc+lCAyJrs9g9CNy/FDTvLZ+/suJE8dBjV2oaNegkUfPYUb73oDScnJ2LhpE7YfOIC4rl3gzisG7bTDQ+pgCnAwfeBg7Dv8K6pLSiDXx0IVFUuYa1ZZFbZk+LugReHgAqbd8f7OaUUltfTgLjHLHr2293USIXX6fqm/CSIitMZOI8Z/u2Px5/e1vV5wcPvw0oLjHZMzOp7R2qu6sSr252M/jblUZbm2500LlQqVue01Vglww4Mvz1tpiK78bdGbTwcDdKtcySoU9q3/8boz5ccK+hK5ypTYbfivbASJDp267metJC5Vec8XUqnULpV2eRXo8mpjQ02qwlfxH76jciKYYJv9ZQyP66obrHPV7Ydtnwsy/Qal0zgIAf9J/hdYf69ZERz319emPagwZL67Ztve0Z9xcfOlKqvbR58WeeJcUKrUdTnFEfd1kDa8HfLR0JqZpXt17s5H0nte8WiLtcmlxgX7ZPBbyvHzT6uh6ToCWQYOdq5fB0H2FRjfK7k1RrStaDO+XNeIYWMGQuUuxfINebhi7mykyM9lNuhDTXk95GSilrUzR9fvXYpfGzrghrHp2PHTZ6A7jMeQLH0ra+C11MAYVCFGc0kUR+eAB/tWLEZl7ARM6qL6w083FG/Fqv0cTJ00sN26/k/BV4MlX61F2szr0El+9qQBwnz8+vUXUPaagL5ZcRDzT/66Ewy4UVNtQUR0FESXy/eSsx7r1/2G3PI6eCN74f4ZA1hPtTiw5jvkUR0wMlOE1Ut3oPucOeDtXIydnK5kfOiweckyGAZOxsCMS+aH57xgr8zF178V4ZZrrwT/LKFlbcZK+ARaaBV/eC7700Ab87GzkgelswCl7kgMH9YVsvNs56CrAUuXrEHmuCnooJae+4HLDFt9BXziSML8//EJIOi1otLoQ3SM7pJrjJ11+/DD+kZMvGoUIgSsM2Yfcg/ugyShCxK0kpPSmvN34bv9Psyb2g0V9S5ExxrCGuwwLhmEAj6yM+NDCof3VhfvKyzeHuBzg/Uv3TZoYocEzd6Ly50D1vXfiDQ1Cg9YQQdPjyjBmugH2/koGwSPXG0W7oN+xEdoUNJQxzKtoa0SNE23sTxg59zz3657Kfw28NntwXw+6iqMqO4cBb7ZiLjGRqj1KlJeIuQXfQ6exwIXqYUgwEVJjRselxMaUyFs+o5gFDJQ9QL4yJxpqqgktfUSrlENjVgGOcOBu0MGyl57GkI/B4roSPTs2xebtm3Hwb37II+LhKWiAaVV5eh97W149vknkJTZGbXp8YgR6pBrNEIkFSPgccDPj4BXIoXU3AiRSAQB44GivgG0XAQeoavLbYdQLgFltINtkpCtSWgrSlObUHwxgnYafmstOHYhilKTIRfpwO0cxNJVJdB/cBQjso9DLpAimhmOSn8SXD4TNOy2DdIufD5pRZ4XHNqK33aWo290EuEr7Cjbug1P3DcE0655BkOHD0HPHj1QXl6OgX17sKbX+OGHH2A31uPorh9QX7uNlN2H/kOvBk8qYW1XLqrtwrh80Kjl7EG5gal3frBranFxjX9Q55hfH7221/USEf+06Ax/JVhLhQFjbnpv549f3HuqFcD+VV/cmZD60h08Hq9dA/2PV7y1gA4ELsn3l1hlTOOwuImLznT/yuvueV6giS5d/cFDnwVov+BM6QQiiTOj/7hF/cdf/15mxy6XxTrtYhChiyoEomYwTC+O09rQX2ba/wl8jsyTEgUDEthqxrebgUh92K7uMVGhVJWxp3w+z8/lcC5ZKEm/P3BG2p4NHboP/sB8bM0ENc8+tu31DIXj3rLi3JUJKR22XpoSnowL5gNLjuwGkzIIw3ukhJbNcWOG4uule2HunggNj1zxNGDjpnwMmH4jOmrZ1xhwewd2WxGNit+XYGWlHhOHdUDR5pWoUnbDsAQ3Fi09iCtmTkeK3ouNK39D9xnTcfyrd+HtfCWGJtJYuuIIrrxpJgQ0We4ZuukLgseGnGP5yFBasG3ZeiRMvh6RRduw2dcdc/rwsWJDATr06wl33nrstSRgTDYfixdvRe8pU2Aw78fyXAXmXp2B5R8tRdLoadBYD+CoXY9xgzo3hbd01ePX776FJ2MMriB1q9/3K/a5kjGiixrbV2xCzOgJzTs4A6g7vA4ri0QYOzgTRVtXwxw1CB0DO7GEvGPalV1RvGUZ9liicNWk/qjftRplip4YGB9EQ2kJyqpTQFXuwoZSNWaPVOKLhb+jx7hJ6BxJYfPy5RB2HIoeWjdWrdyFDuNmgXfoR1QbRmJMNzWZaH5CbfQojOvaJLRacjbj0zXFGDHpSgirtmNXnQGThidh08oNSOwzHHrncaw6zMHMcTH48fvd6Dl5AkSlu5Hvj0efVC9+/jUPIycMRcOR7XDE90cPUQ22F/nRt1ca8n5bgoaYMZjQJybU7j5HNX785lckDh2HZKYUP67MwahbZ6Nh1UL4e83G0Dhg2y8/gul5BZKcJThqUqJvRw22L1sKSf9rEVu6BL/WRmP6lX0RJQkQBoIwc6R9aZ8DO9dvRkTHvpAZD+DX4xRuvnY4WtSKTIAGzZMgWh8BniUPC0nf6DZ6BCQ1e7GxQIhxY+OxbvV+DJ0yBFu++BLRI6ZhQLIEe7bvh6ZbX+jdR/HzZiumzBoPfUiWDqLiwHqszQ1i/OjeqN2/Gge8HTF1sAw/fL4Z/aZMhL94GyqknTChZ3JTIaR6jJx8NToe34rleS1fq7woq3Gh6+hMaKOF6JvyOw7m2jB99LWINpajKD8XDsKkddHITgwmnxnbf9sJWee+SEABvlteiUlzRmHzu2+D6jcFgyIdWLwyHzPvmIKqVYtRoOqLkR3V2Pzr
L5B1m4CM4BGsOMbDjFEZOLBlG9SDpqGHvmkeCjQewX8XHkT/iaPBLT8Go4MP2l6LzdsPIqlXH0hrfscvh0WYNXsYlM38b97u32CJvQLpzH58v9GGaVcNg/XAahwSDsZ1/Sms/K0I2f17wHFsLQ64UjAqI4BVB2yYNWk4RKY9+GiFCdOu7YL1n/wCw6BxSEI5liw/ikl3TkPZylXg9ZuOwfES7F76BWxJo5FkXoelxTrMnNgLtoKDqOGnoUciByuXrEHG2GsRUbsBvxWKMXZEBxRuWw9H2gikuSvg56YgITMbOT+vRm5VOqIs27HySABjR3VHw951OMzthmtGZzV9S6IbseTzHyDrNRaZ4npU1DqQGfQhfyOpQzARw7sYsHvtKsh7TMOgDFVIHDCV78bitVUYO64f6RvbQKcNQHT1buzzxWFEtyjsXbcSom5ToS5ZjpozjEU0HML73+1B/xlXI8adh583FmP06AHY+fN3kPWdjKEdY5GzYw1sKVPQjd6Mz3dzMXVCH1TvXo5tlQrMmD4M9oPrcRTZGNddhNUrD6HLyKFA8XbstCViZtcAVm8y45qxUfjmq03IHD0VPeIk2LnyZ/gSB6B/PLBuxWYkjJ6NvnFNigFrxV5SjjoMGdUb9XvXolTZG0MVFfiZ0G762E44vn0rBB1HI1vYPB6ZAKpyduNIgxx9OyVh57KvUNhvNnoLjuKrVZUYMWEQbEUFsHni4G0swerfKnH1tYNwdMli1McOxsBkPn5bsQ5xgyejf6r2QpecvwRhnwx/PwgEFLIy49hlN+r9tcV7ioq3B3gI1r9468Ap2UkRv//R/EK+EzgBJErE6G2QkfXSFVIQ8HgU2olaeUb46AB8Xjc6xkTiUHVDSMHACqItlg7s3/Z8NFyMMqG90JYhVUarfMtjF0u4yN99lRbERspQXF2L7joigNNO+JxV4LgIG+UNQMCvgU4XjQPlAtC1pQiSeZ3HRs7gFYNLqcAzmyGL1cBD1kqZUAWPTITMuCgIFSoEZFo8++y/oYuJQoPJioriMtQWliL29uvR74rxaFj8AxRSLez5e9ElNh7FfAoKPw2n3x3aXsClufApJJCY+OCQelA+J5l/i6HK6gyTRASLsRYqjR7GRiOaVQzgBVrqTujKVlhCuAOnCxxSr7JSoFuPruCs3wj+qEH4cC2wPq8CE3tzIGV+Q1ZsNmZO7g+BZTNsNhqNbh8qK2l46Qrw6h3Y5YpFF60C3WO58DJGLP/qLpjNEsgiu0Ks0sLtYVBWlAerKQ96NWH8vX5o5F7YGmxY9+Zj4NT5MWjOXQiKeOCROZR3jrCjYfx10Khk0HRP47MKh/kf/j61qKTGP6Bj9IrH5vS+TiriXzYz8j+C6LjE4qw+I389tmvdxLbX929ZOWPAjHuei4qOrTj1mcLigvQdtjXjLlUZ+kcM3da9U6+dZ0szcsKMhckZ2YeXvv/ApxU5R3uyShGJTGmO7TJ82fCrbns9OS0j53Jtf7jUYL/sy1T67VCN6eB2uaQ1RQdvSJaYHoXfE3WGRxi7NO0xcVTWq3LqRB0pHo++lEoGH01fkJKBhUfV4V54j2bD54pvvcgEhZHB0rds1qgrFEpV4yUpZBtcsE8GmUYL61EjaPKUgKxqNmsjaJkaspbFUiCFXsFFZY0N3bQacIIuHNu+Gf7EHrAUmZAxaCyiNGIIumTh+M4qBOI0kMalIiVSBQGvofU9fKUWKR2IAKGywCA5CDtpKh3FOjpq2u8IkQIdyP3IGAlidSJYbQxaNi75LTUor/egi1gCaZQBgVoXYZiVkMZnIDs+ClJNDPiHq+AOKjBi5lVgPA04cKgWdoHyZNNGIhRmdYqHShLEjiIjInsMZD2cYuSMGeCLeTh2iKQJelBcVgN1zCCyAEuIIDAVXIEQjfsFiE1JQFRkJBjCfFRHZCPJoIc4SY/CSjcCDBfaBCIsRZPyRHTE7mP7YXQrIVHFICMjCpyGfDQG1BiXGgeNlIvO8ftRUm5EPHPC/JJhTm88WWwmspMMcEkTwC0xwWaqhMmvQB+1HMrIfpiVwgOfH8D4GQYErTU4UFkDq4JlGJIxZXocnA0lKK00QqrzwmyrQr3LALFEibgoJUodVrLMx4Dt6V53PTzSOKQnxkBLCqGXlzaXgEP6bpN3qVA5gwEYqyphFSohVuoRq+GjwuEBhzBysUlpiI2QgeuzE0aJQxgaLoJ+G8oqzdB2k8AQaQD/YCnsfi9oL81uNAI/FFucE/oiX19dBYeI5CkXgSsfgKsyKcJSVbfSQhgRhZTEWAgCpSipNMHQXwKlOhaiQBV8flI4EYf1SINawhjGdhoCg0YDTZ+u2Lsoh7zzCky8ZiL8tmr8XtYAX8K5LLkoCIlU6/WRcpK2tbkJQybno66sBMLIJHTtEweh9yccyKlAqi499ETAbSVtWo/ufSRQqOMgCebCQ8olUEciOzMVkdJ66IhAZ3OYkFvLRZ/RGdAIOejaMQpbqurRf0BvzEygUVdyHBU1hBH0nZjPbOVlECR1R3aiHh4qG9qSIvgcFpTWmtFBJENkQiS4e6rgIbyv8jQ+iEJ0ehYSoqLgshKhOs8Op9mNcqMH3ciY0kbpQR9zk74nbI3xxrT0S0s1LCIivHeIR4STh2htCTho3tLQ0llbOjBXhISMJBiUQhSWk7JkdoBEY0CkLAgPYf7qq0i7ZI5GpC4K2tETYCV9IOc3Mm7TMki6SJKOCaWrq6gh6UYiWqdHVI9s7P+1ECY6CwZWx2mpgokbh4GdEqH1SBFnyAfjc5Ax2AhDn74Qk3lkyMSZoITiVosouTYD06Ymw1pTRPpiIyIMNhRVGaHv3iuUftCEpvT5xWcfi0KJDjFEeIlwJyOSOo5GjxccRSQ6JkVBJmpjnM3hI4r005hoAwSxMSgWxpH5UA9zciQOH3URklbAJjEgQi6FtPsViOcKwDWdsJQUyqPQISsafFs56t0SDCM01agodE/ZjwPFdegZlxTSKhsL88DVd4ZaKoFuyCRk8wXg+fWYEUfDWF5A5mwjojKC4Eh5TeORCBWNVVWw8DLI2NWRsSpAid0Ds6sSmozuSIuNhM2Zhr3WE2ODcdSh0ELKMDkVWopG5zQlEbzM/zglQxh/bwiIsNoho0nh8OH6kl2VFbusEj5H+M3jY8/5bAvYWYliRwblx/AUA/Yay0JRIrxeNgT7+QuHrAKBIuWJkEkhNtnJbwF8Pm+zMoFzNl/fF41zKSnYuwUNHljJWJWYC1BdVY5EMq/zqTg0WnaFErBzaC+dD7u5wpB1AR2gQ1YFfA6XCPQceMx2SOIM4LjIOizjkDVci99//BUusrbRLuDR/zyLW264CfNvuz20hh47ngOjTwx9eQlZj+WweQOov/MxdD1WiBy/E7GEnzhUTibPzDhUNjQiSqcBt7yUrJ0+1vEanHWED+zGg0ilQENtLZTJXZp5ira1avZgSSZeis8HQ8rLNJhAN1bhuMWLqJ6doCfvqx7cD4X1HfGfVb9D4axApw77sXa3BJ26sh8uDkDhrQQb7IJMqXBZ6tDYYEZ5QQR
ocRzpGXKolA5yOOHxboC7kRta25Mi/OBHBENWLnUNhCZOC9QKGjKy4Bw89BIqqg/gmkfeRJCrCJU0QNNNPjBI32LDTpzwvdnGC2cYfynUKhl6dkvjk5E/+a7/7raWlNR4hnWLWfTkdf1u+KvLlj54xicFB7aN9HndrWaEbodNtXXFD3Nn3HLf86em31a8fnRBY2H0pXi3TCD1Xd3n9lfPJ21yevaRe99c1ftSvPfvArFE4kzu1P9d8vPd2prqBKXj+BfigHUQu4WCtWFj+NISe0SvKxWqiPxTn+VxuQHCRl0yJYPXHzhHiKAzIyomPq+myPxkFAo+R5tJRxhwdnPWH3sMygH3neXxCwLl8XhErHMQdjFhte7ni+jsobgxvhbbVi1FjTOIlE79cNMsNtxPcwKuFP2uuhm1+XuwbPE6+IVadOo1AFlRZMK9Zi4K9mzFDzsaoU3qgmuu6g5Yy5CWKAvtZyFSH5IzUqAUUohLTYOaCPKgRIhPS4WK4kGf3Q8J63bh9zwZtPGpRDoSsQZv0CWkQqGkIOUmIoWWgx8TjxFZm7B9+a8wJCQiI1kCMWGsUxOpkLMnCCKQnsaFSCCB156Pnfvy4BfFYFCfbAipZvqT98alpEAlYNlzAUaTsuft345VK+yISOmCAV2JEBWbAq5GjW7dZqHs8C6sXrkHUkMa+hFBVapLQKJfHTLJk+njkdzsU0SsjkEyI4dULkeSvgbb1vwCO6PG8KunEPG9BhVpCH21lxFBdNZkOXZtXU7ozEeH3uMwMU4Nn2ssXJu24MdfhFDLDGTBP2HaLlBGIiOBF2KP+KIIpKaIoI/Pwqxxxdi5Yy0aaRm69R2A9CgxguXHseNACRh5Egb3zYSMdIGGokPYn1sNUWI39MuMhoYskh137sSvv65BSkIs0pTypu2QhERyXWdMHSbCjpU/gSYCk9nNKvAk6DVmLDb/tgI/HlFApUpEvEKBpL6DYN/xO5Ytr0OiLgOxRKqVipKRyJM19XaBAf37pWL7tu2IHtADQwemY/fWVaiNiUVmehz4PCGksqbxFfQHEZ+SGhLS9F1G4hp9LjavWQE3X41eAwYjhtA1NT0BMqEISanJZJIkjJI0GWMHW7B59TIUkzbL6JQKScjKjLQtl0KvsVeh4tg+/LBoB6SR6bj6hmmQ8oKoyPkde46Wg2PIwuCuiaeMBA7EKgPS4pu/SnH5uGLqNBz8fRsW7fYjudt4jElTgSGr1q6NS1Fq9iMuuyfGZ5/Ih6eMw5Uju+K3tctQEpOKzG4ZkHG5SEhPI12btCLreCojDQpFHGbdOA17N6/AbqMXSWTMzRodTQREO4r378PRUhOhxWB0izkxB6nJ+8dLdmPZD9+HlDsJSXGQRRI69LFh+6pfoUlOIAxXDOtrq3X7qDaW0EUtgZT0xGSOMhSGi6+KQXqCEqLYKAzP2IitZExFkTGVniiFLi4Bw+17seynn6CJ1CIlJRpC0i+uGVuArUt/hJ8wrUY3a46qwMBxV2DThtX4/oAUBnUiotRiyEWJSOAqSRPI0XfkEGzashPLKg2IT+wIhZSLtHEzUbT/N7zw7xxkDB2Pcf0z0Hv4YGzeuqspXUI25DIu0sfMRHnOPixZvAOSqAzMnDsOshZbLW1nzJokx9ZlP8JDKSA2JBGGUI+sa2Yh5/edWL7PBF1KV/TrkX6iXQRCmHIP4mB+LWSpvdCnI5kHusYhdzdJf4DMXYTp7d8zA5lDx8JxhrHIwu82Y/f65XDaKHQeORkpGsJDp5A+KWwqnDY+jdBbCGEwFsk8TWiukEbEIlWgbhrPikgy7gSI7tQbV6lzsGP9SniEBvQZ2BuRUjKHJYtBCVVITUsAO4OLFPGYOV2EvbtWY5uFg9TuIzAj+cTWnJQhMyEu3I9NZLxwyNjs368rdCIeio/txeESI1QdBqBXigoSJhNdDOuw+0A5RvYeCPtOQu/l9UjSpSFOxUNMtynof3QXlvxwgN3zh+SEONJVxUhPZcgclIrZc7TYQ/r8DiuQ1nUIpiWpsWvF9/CmDsGQjEj8ExD2yfD3h1LCL+d4nDt25FcOeXBWr/lDu8U+TS53+iN5cJqNOtXkz6iUCCw/WgOazztJedqiPWSCXLRYLQdDgRqa+McWQZ9VTmTHaLA7rwpeh4WswU0Ta4A87yPrjEAggJAvaPWj0BIdoiWP9hQGbSNTnMt6IZQH9/Q8KMJR/FhUhgdSYnGE/I3UqSHouAD08Z0QCevgdgVQZslBr3g1Nud4QFsb4dRHQMgVwc7xg1dTCVHnDgg0NkIo15P5ipS9/DD8Ti+hAQ9V+YVYt2Yjiuvs+H3dWsy78WZoybr/U+Eh7Nm8Ht7xEzBp42Z4TbUQZ3YApOwWBzcEhIgmiia8nQw8Dp/wBGRNDYhQ3VgHodsDCxHMPTVm6LqQNSnIg625roE2+h+K5jWJ6qySOFIPblUNXIW5KLXWQiBVITG6I6xMEfhD+8Ntb8B2uwX+CguWHt0PhnahV2osBmcGIOOR8sAHhuOAgyE0sNTAR8rCV7ihp/RwBWlwA16ohZGI1gmwe08+4acIP8ejoVWxVSI9iROEP2hGQdkifPfeNkRHZqO6thECtg39DOy0GxxeJNIyBqHzkDmQkLx4FKuAuGDZIYxLCIrH9cTIOIuPHa274qqhaZ/dMr7TM391mVgMGjFm+cav9FW+2rK0ttf3LH3v8QiNxjx08pz/tmybcLqd0u/zvphzqd7dN3L4ul7ZfXZdqvzOB4FAgMrNzR2h0+mGabXaUWSuYz2BNmts0fTpMhAoJPPd2uLi4tUJCQkb+Hz+ZffnEBkVXUYk4GHsb9ZSg7V4YAukOEN6PkX5eVzOJYs34faev+PH9qBLyPq2LqdurIFvm9n2uoZpuDXn8L61HTr3WH1xJTwZ1JtvvnnPhfhkCD0sj8TQKyedNU1kei9MIsfJECCt1whytLmkS8TgVj5Ygb7DBoZ+RQ0a3HxNiR6DBzX/1mPkxBaroROKutRezWkje6ElVkd6vyvIcfLbB+mbf8gSMHRwQtPvpK6YRI7TIFCg68CBrac8gRRZfUeR40SSxC79kdj8O6HzIHK0eT6lJ1pUetq0Hmj5jqeM64R+cU2/BxvScTISMGRQQpsiRGHwmMknF0uiRf8rp55eXgJJdAaGNpNFokjEgBZi6FIwckLKSWkjSSNMTTu5fVK6DCJH2ytEcB92JU5txSZwoYrNxJXkgN+MvdJcKPnsvqsYXDFlxilpFRh0Sj3I9IUTtkdcRGYPwLQWp8KEyLFZ7b4UXL4YPQcOaj2XR2Vi/LS226Z0GDikqUNpB53oAPrMXrgqs/2asJ8x4ojgzh5t6xeX1S90nAlKQu8hbfTFlESDnoRePduk4RAhsN+YSWg/Fx40RMC9ihxtoR/S0vc16DO8pa5S9Bwx/qS8CceHTv1GotMZihiZ0RvTMk5WLMd07IMZHfu0mz6p84DmXypom/uoOLpTa5/KHDAKmac8k9BlIDlOvmby+0MevgNEqB83awTiJQKyOkSRsXtqv+2NliEp1CRi9OTE08qU1r
E3Bnt1SO6cChG7HYukGzXp9HSJHclY7NhutSDVJmHMlKRTrgrRof9IcpyenkuJkN59CDnaXqWQ2W8EOdpcOstYhEiD7K5d0HVQN7R1M9J7wIkMksk7mtCtdd7SJHVGSytII7MwoFkmFyZ0xPiEthVMwtDmhwYNORH2mS/To9/ICe33Ny5rodIbk9Pb9gkBsvuOIEfbhFJ0Hzm59WzA6Mk4FQmdB5483xEMHdDcaUQq9B01CW2z7DtuJsII42IhE/Hq+LRn09YD5cPuv7rnQ6N6ZS9C80pLGL+nLzTfIOEF++uU2Ce1otjuQks0S1apwARPd/rCfllvz1eDnBIiEm7klJfA01BHBFUinPL58EVEwJDaKRRaUsKGvhRJiWDNa1U4nBpFouX8TBEleH/A/jTIccHi5COPoaAh+R0rqUXPjG5QDH8Z4qK7SX0jECGmUV5bgz4Skt5YB5M6FoyYCO52J/h+HzSEPnyHD1xS+AYiqHOIAC1kiOBP5hTa6ceGnb8jLxDEEy++hnjCB+QXHMfub5cis28/HNm+BgczO0KfkQkq4IeXJafVCo7XDx8pk1StACNTwWuugd/rBMcmho7QpkIggc9WSgR7MvsKeBCRZnEzTcoULgKgGB+EQglETicYswlemw0+tlUUIvhq6uEzF+EYJQFHFUHSaSGT6SA3eEFrGsC12EISSymfi+JiD9wuJ5wkH7+vRVZhwA8E0Dm2L+aPT0eyaQkUAjdsTBFKapXoP7AXPO56HCuuhEJM2ouh4fP5oVDI0IXwniW1hSisqkRdlR119Qx0kYTekXGgLbUoXroEB/cshFqThUnXvw2pQgUeV8i6dQInvLviTwWXy6Fj5byfVu/IHzymT+LC28Z3efLuKZ3cf3W52oIVZvtPvvW1Vf99/IO2vhlon1e09sv/vLz+65demHDPW1f3Hzpq1ea9G0bm1hzuxt7nks4kpIQMn8eHkEf+UgLwuQIOnxJy+DwBBORcQP7KiGyjEMmhECohFynIoYJcoibXVOgg6/L9xYSOPF94vV6xzWabq9PpniZzXqREIgn5QJHJZGB/nwJeIBDocPDgwQ5RUVH/Yp3sEhjtdvsb5PdbYrH4skcPOR9niQIB33dJlQw++qKUDBTF93P0nRfAvncw6DZbP4K0NFNUucjYENNdq4ssvuiCtrxPpVJZkpOTizUaTfKlyjSM/1Hw1eg55MzCeBj/W9DEZWFM3Bm0RH8UEj0GtmoH/0GQx2HQoLi/uhRhXCDCPhn+PhDxuWYibq7bvL902N3Tuz80vl+nL3HtGRTGFwwuKI4fQ1NUKD7iDDnypYMt26E4YIgwzWGaLQdCkmA7+6MILHI1bJ16IplMWdHTH0Idn4NUmouDtz6MwmM7kTl0DBi+CC4HEYxpf1O4S5JRaEvZOawazrQ14lxbJliFRJDiY31OPeZkRaO6LAdWwmIK6p8NKRhEbj6WHaiDy0Mh6LfDW5kHTnQmmcPU8BirIKOEcLhtbPB12D0OCH0Melx9JdZ+8Bo0Hj/06akoKCjBkMwM9OmQBrWUj+LcHEiyuyNmxCBwYqLhzitBZ70BZRwabDRKPqk3l8eD1+dEMCgHzQlCEGAgFojhsjpBuT1gSMIglzUqoSGQSOFxWEN2J0FCK0OQD3FeAd4aOxaZifHYu30bBg4fBpfTCwGh73d79uLNDStQb6wHYy2Dx0fDG/CSvBj2awBpRJpIKn7AxYbG9JMy0OCwziAZAWsEHTI/6ZOQhY6xsZBGDcW/XliCqwd2hkaZC5vfiP2lTmiEKnTt3BFWixV2Uz2EfDnSU9KhUDNw5HhJ+2Zi3r1vQxUbjbo6I/Jy8uFxWsERSsFl+Ni29DO88chQXHPnO0jOJnTi8lota8K4fCDDJRit4P2y4ffCXoM6xSy/9cruC/41pePfOqxl1wGjl+1f9eXdtWX5JzFWmqj44jteWTJIrlBY2fMoWay35Dl3s5luCJxT/v4hEEH/gv0AnC/MZvM9arX6BZ1OFzIFZeezpKSkUHhg9i+5f9ozbJqNGzciLq6Vx9LK5XJ268i/LRbL8+T3szwe77IrR84GAZ/yXUqfDJ4LiC5xKiiK7wH39BCunKBfqXUcXmoTiAcplErLxb4n9K7bb7/9Q/bARcSzPR1e5O3bD6OHCalkRaxZcmYSxNQF9e9zgt3Tnp9fCAsZU5roZCTHasG/WE1w0Ifq4lLwIpNgOO+wDwystaUoLK9HUChHYnIK6fHCP3W3nauxDOU2IVISIptoQBbmurJS0KpYxKj+vtECzg8+VBWWkDZJQ6Ts7A3sNZejxCZBaoL2EizXftQWFSCgS4fcXgIjPxZJ+hN79wNuC2lzExIzkvF3NHj02hpQYqSRnhSFdqxoLwx+K/IKTYhNS4L0MvJDrgbSn90KZMar//Cz5upC2CgD4vXy08eg24iccieSMhLw9xoVQRjLCuGRx5+07cJjqkaxlUJW0vkrWkwV+bCJ4pCo+zOi7JwA7ahHUR1NGIPoE9vnwgjjAsCnuE45x7d68/7SwfMmdX188qDOHz82u+dlc9vPCvnsN8KuWhmuzYpFhc2LWpMd9W4nrEQwdhMh1R9gQx6y/krOXIwyIgjnNTYistEOGZ+DYg5F5iIinNtd8HrcOLZuJ7qNGwlVfCxqGxvgp4NE4GTjVDAhx4enWjWcVMa215kzXG+3bnwImCDMDBe7zUEM5utg2fQcat1aWO1+7MupBO0nAnfABzaiu9JXDKvfC45OAqaIlI8I7vbyEkTGpcLkcoCWCEEX1yBt2lTULV2Cfz/9JN7976coOFqADVt2Iis+kgjTASg7JuNAbhXURXXol9kZPEJbW3UtEfYpUFIJfOSdfEJbm82JCL0GVF0t6uxWUAIhvFWVkGWkwC+oRsBkhVSlAW2uZSNPgjI1YEFCAmbcdAO0CckoJsJ7XUk1nn38GXCFWowa0gc6ayOW3jIfqxtq8PzCRWB4Afh9DvAIXyRk/R4FGLJ+CUK8kpHmwssXgKb9CHp84NIMeP4Au2MWt/3rTggJrYdOHI4lG3aBcfKRnqiGPoaD8qALx2tsiBJ40DlVhbhIwu/GBbC7qgKUvwvueOYjBIRSLPtxIejyX6DR+gj9OHB7/SjPd2LgFQuwYV0A37zwOOa//DW0kel/zEQljD+EaAW1YvOeoo69OxjW3za2x333Ten0t3DueD7QGSJrhs199P7l7y343G5qiBRJ5LbIxIwjg2be83SLgoGFUqmsJ39YqfyS7Et0u92X7UsJTdN8hmE2qdXq0+xJ2a1nCxYsaFfBwIK1ALv33nuxfPly6HQnRWvjqlSqJwKBwESbzTZKoVDUX6binxMCPmvJgEtmyUAHAhcdh5ChnfHwOU81622C19bRmL/jJUm3K+68FNYrVG1tbWRVVVVMZGQkYmJizv1ESyEDXtRXFKOoygIREWA7ZcWRhaLlrgv79x1Dp0lz0NFAxC77MXz08fcYO+dqqL2VKCishF+oRmp6MjR8LwrJOUdAo77OAU1cBhIUDqxbtQr5dgXGXjkeWdEClOccR6XVB
318BpJilC1WjDAW78JPWy2Yfc0odCArRX3RLizfXIkxvZJRWlYG2uuDKDIJUSI3CovK4eHIkZyWBo3Qh8LSagiDHtRavYhMyUKiXgbaaUJhHhFOyKJffvgYYkZFwyDjoaEsn9TVDKk+EenJURCeyjzTNmz5+Sc4s8ZhbO8+ZJGyYMvqFdAOGI2OWj6qCvJQ3uiEwpBMhD09aHsNyqps8HvscNACpGd1QISUg7riAhTXmCHRxiIjXo5DOzZiy6F69Bk+Hj3iOcivNJFOQEMdpyELuAApRHBkbNUoMHKIICZGRa0fMZkJ4PhsyMvJg9kXRF1+DmQ9xyFGJSCCVzHyS+vBV0YhIyORLLDNjUYYi8oSUm+y8FmsblLOJHRIMYAbJO1SloeSWjukuniysEaB46jD8Qoj690QuvQOiFE0CTEBjxXFhYWot9HQJ6YjNVodokP+cbYcFGJTMhCrk8JnrUd+QSkcjBjJGRmIVAjhd5mQe7wQ9qCACH6ZiFSKwPidKC8qRLXZi4jYVKTHyRBkQ4JxCCsWZBUOOagwuiAn7ZueYICAQ64VFaDU6IHIWYCD7kwkJETAb65BQVEF3FwFktNToJMJ4HcacexYAZxBEZKzskJlOJVFo10WFBYUwEKGWd2BXZANS0UPfgBBiiE0r8T+AhM4DOmTiSmgg01ODWmHEYdJvn6BCkpCFl1MCiIkDGrLilFca4MyJokwXwpU5BbDLaRgrWsM+UhINXCwccVyHKjjY/yV49AhWYm6wmMoq3dBFZuG1LiIkGPM0sIC1FgDMCSmIi36ZMGbFaoLyhogjIglfTS2VanHek9n6Rb67Xcg92guTB4GsWlZiNNKwaE9qCzOR4WZhlZOISBPQIc4OWHYclBSY4VIQ/JLiYOECsJeV47jJfUQCp3Ys9uMKxOTwPcYUZhPGFLCmSakpiJKxUN1YRGMhFHzc1Xomh4FS00pCsn4kejjiaAcCQoelOTkkLr4oCX1SyH1a4mqydBeVJfmk/HCxnHPwVFuj5CSwW2pRV5+GbyUHCnpqdDK2irXaTJ28giNrZDp2DEajeq83SiSDiBj3Y7qRi4ZK5EImsn8Y+YjU16JzRsLCRNN+rGLQnw661xWTPqrFfm5BTB7eUjKzEKU8nS1EdsvivILSf04iEpKI/OGoll5w8BG6llG+qvPZgYj0SIjMxVyQjdTTQkKyxvAlRqQScanLGDBkfxqBINe8GRiFG3fiGIkYsKoQeQe3eRAMjT28lFC5sUg6W8H7LEhJYPLVIGcggr4WTqQsaJrowQNeu2kLfLR6OHDQeZGV/IkJGqFsNSWkb5RB0pNxn0yacs2WliPpQbH80pBkzk5s0MGFAIGxqpCFJUbQamiyXxlQNWeFfhlZx36jL8S3fU8FFVYQgrMmA7dofLX4Rjpz36+EpnZHSAl62kwwIT6ndtUi3wy7zrJ1eSMVDLO/l4qnXPhcvlk6JcdvXrLjmOt8ct7pBs2/aH7HU++/3dAtF5Rl5oaZ7jYfNg90Qquf/WmfSUDbx7f6dkZwzq/ezkVC23BDmMet2le6U/4jqbdmCfz6QxN5iUy4F1kFvOxX/V9ZLwQYbrKZSO8hAN1ZjucAQ721ZXBaDShaynhb+LiUMvlosNjt8N03b0IRnFxYN2v6DZ6ImRqPerqqsmLWWfW3FZXk2f0z9CGEq3bKNh0vHY0ekEGXDIW+eyHei4TsgZgHVTuMPEwIEmGZ96vgEAbD6ejETaTA0IJFwpRA8RsOlMjgmQN1uhjiPDNI8/5wa9thC8qnkwaQdQSHoYV2NOGjkD9zn244ZrZ8JM5iw2YV1lXiofuvRNfrP4RxYdykN6pI1TRaRgxcjBesZeDS4QH4+FDZC0liauq4SXzQqPLhSBfhAhSVpfbAorMy34iqEsz0tAoE6Khvg4qrRaeQgoU48ZADo2Oah2khG+tLC/B999+h9qKXUiJCWDxqi1QcB3oM3QQKDLHZlrNuOXqCfhy0Qoy9wnQISjBW/PuAU1xQ75x6kxGmG12CCK0eO3LL/F7ZS4CLgc4Hi9ySgtx7OhRDB48GB6yhttcHsQP6gtXeiL2d+oKn1IPinZi8JpPcYTQUSrrDTH3d8KLcTB0+jRQYhkcVgvsZWsxrquL8HBOWIx2NhQdxAovjh96D4PHzseKdx/Fkd3rMXBiMgT4Y3G3+2VHgcwHrec90vV/8H70/7H3FQByHFfaX9Mw7SzzakG7K2ZGW2RLFlkySAaZKYmTS+4S5wIX+u+Sy4XBAcfsmGVbYIFtkcXMtLtaZpwdbvqremBHaJliJ+7Pbm1PT3d1dXVNd32v3nvfed9/HpCRYm/v3z/vE8kUnG7n3tl54Fzh4KKUHffNGfWVry8edGnW+k+AMZNnrBsz+eDlFA40RI0M3fiEjAylpaXf9/l8E4PB4F3Jycn1n0SZFDQn4OrVq98cO3bshJycnPOed/Q59dxzz2HDhg1XLOPgwYP4zW9+g//6r/86L3cNPf7w4cNDX3nllS0//vGPB11O5vPThpb4kWU/wcSPH11dgoIadbiuM/+pZa81OY4h6LkowLjQ7r+npbXuqfSsfldUE7ka8E899dSKj5KTQQ4H4QtIWtKj2l2v4ETnzbhtcp+RQvJ14dje7eTloKKTEKyUgdNgrFyL56vScMfCcTB6G/DK8y9i1IIpOLhtF4YuX47xRQrWPPkM/Ncux5DSXHjas1CS5MMbTzyNYO5AZFt47F77HM6OXoDZw3K0l7G3oxmW9AIYGRH1p08RkiEiMy0LMnnY79p9CjfcuhApZhbtNR0QSZt6W87gmYMVWHHdEOzafhhzb12E8aZWvPjnlQgvnYEDr67HkMW3Y6zLC2/FaXIf/Nj75ivYF8zEoCwbzu1ZhcNnR2Pp7OHnGRqkoB/NXgaDsyPShKzJhWkLF9Mpdax/4TUkTVuI8f0daD29CS+8fgZTB6nYeUzF0oVToFRtwYtr92LJWBNWb6rEkHHDkWbn4A0bUTqoP041GzG4NAthst+e00bctXQCeWHtxJaDMrLzMqG2VGL7YRbFaanYQ65paqEDG59fjYJ5d2Bcahg7CPkMqCEc3PAyqpwTceOECfC1V+C151/HnMULyLl4aiHAmf17YZt8AyYMdKP6/VfwSm0ZMtv3IDhgMWaNd6H71Pt46tVKLBrBYdexMO66dVrfzL2qIEgGCWGJUAvFg00vPYP2mx+E49ha7PfnYSy5XzwZsIR8wM4N6+B1l2Fo/2Ryn3rR03kML25pxrLb58KmBLDtzZdxumwyvAc3I3faYkwoc6Dj3AlUtaWgcvf7sNgHIoX1w0u9wcjA5/CG11Ezdg5SG3agJX8qbhg/CF3HmnDoHBmsVL+PlbsULF0yFU6lBW8+8zTy5t+E4NbVqEkajNElZEzs70HYktqX7BPQ1C3eeOF1DJx7C8ZlKth+9oAm3dV6ahcOO5OQn3Yae08LWH7rFBgIEV678zRSnAG8seoEbr5vCZwQsebZv0GclonDq55BjbGE3B8rTm94BkfL5iHp7FbY596PCeNKsOnlJ9A9YiEGDeyH
Ot6MQQU8tjz3OE6T6yx38zi28UUcK5uFEcwxbGswY9zIATBJfvikpKgXgYTKjc9jVUMyhhe50HxwI7afGIQHFozRiLu/oxbb93Uj3x7Ci6v3YcGdN6HcwOLohhewxjUOyXWb4C2eg5ljs9C8/VWsI+cozeDIYErU/ISrdr+Nk82TMTGtGRtPmnA7lf7yn8IBUqbSdhBPvdOAhbfPwwCmG++9/Dwqxs2Dum8HlLG3YjqVDX3jT6QPZKM814GWba9gf9X1mOM+i3dOsxg/phw2PqQ9SxyWiEvEoXV/R0PadFw/Ng8texpwoh3wnHgXfz9Gzr10IkyBNqx65UUU37AEA900eNiHnW+9Dk//6zF7wkB422pQU9uqJVqj6G2twO5jBuTmZUBsPoPtJ60onsyDE8woGjgSSd5WvPHqayieOAQH1m9FRvlg2IUg3nhqJ8bf8gCGpScO+FTSz70I0a7na8PaVw5hxq23oCzZSnsNOiqP4IgyCMunjEdP0yG89so63DB/iua+S2O3W4+9g73nRuOhiRz27KnAvHsXI50Mlg3kueSXy1GcZSfttQNdhekIH9mAfRiJxdPGoPu4D0fOaA9eeL1BzVXY234aLx45h1vI4DnVzEAOeEh9XkPetYsxPtuBI20HQH4COPf+S1hJnhsjSlPhP/QOth0egq/cNFq7lubD7+Ll7c0YOrgAUvcJPL3/JOZfNwjb1m5H3shxyHEa4A+p6D9gAFLOcFof5U6tw8FqJ25ZNA6+45vwpy11GDqkELKHPFufOY7rJ2Zg5yEZmf3SyXvAp92HUNtxPLv3DO5/eDH5bej4zaPTryh/8IHff+XK338WGH3/82px8Uc7lsZEOzlp3ZYD58beOXvgz5bNGPzL/1w+8jMZFH4g+IhRz0qTOQsC3ORjvpkQN46O/VK132Z9bxdO7d+IMyEPdr/wMsY//Ai2uo04MrgEw8jz/Oxj30HPycM4RIimbewEqORYnpNgNBjIQFsgJJr8nsnVKwYGBllCgGxTOU4zKijk2W9iqTQiA0ZQYZQ5WGmiwvZmCEEPDMEeFOWmw99WCUdARK/YDc5kAkPGEwc3babkDY4UDhtIddNLrOju8kCWRLS0BeENBZGcXIZw2Ac3GfvwhHx7rA5wNKGiyCDU40VSmhPVnV1QnTYIvb04K9Zi4He+hb0/+TmYU/s1hY1UqxE//8GPCIm2w2J2wm9PQWpOFn5ddwLHwyEYRQUBQupVngOT5gIYnmZNh8XtQJDURaYPWPLyYtrbYCJtYXA50NXcgsKCEjTyLOTuIKb2H4KxA4ZAVlSkp6dgzNjBGHHvdBzZuwVHK9ajwOzF4JwC9JIHUKDHj1vHTsNrpncxyZ2B/1y+jDIcfOvhh1GaHIbZVIDcgWVYcNNcfHfZ7XimvgKvPPksFCc5b9iLb/7+d5jw+mp4SNt+/6v3YHrqIRxu9WDrvgPYNeEaiIyR9IMwio1dyLN4yLWwEDgWqsxooRn0PZxqEMh4xwejw4aArw6ixEPyS+iWGmEm42SvbIWns5uMZRUtkeaHwa+/Mv1jfj/tQ53vasF8VG1WRJ4n/S9MV/YhkGJh3z9wrDa1KNt59N4bRz/yHzcO/sxms//RoIaAysrK94qKii5Mn/WRYbVaZ5Clhqw+Rwj8PZ9EGMLx48cnLVu2bBZddzgc+NnPfob+5KYnJSXRa8DixYvR0dGBn/70p+glz5oLQY0KX/rSl/DII4+gtbUV3d3d6OnpoeWC8Frts8FgIMOXAQ/dfvvtv/u49f0oSHFae09/dyrNy/CJlLd3796P5TDN87zoKpuhJRHz+31uS+vOHQh2l8Z3YDl/O5fzI3dq9v6PWdXI+QoKCqpnzJjxTnFx8YwPc2DVvg3Y0Z1PCF0BQtX70aueb6jhrUkYNHpixJMB12rbxGYvmL3VaGovhaWzDt2sFUnU/YE8hziNX1DLvRrJEky2hQhpDalGpLnMaLRnY8iQDGSnJsFCswdHz5NZNALCYTJwr0jBoNxi5Ph2YB0ZjZdOLaRBk+R/qq95GitXH8KExXPgQDcONkRDUbiopB7ZjyEvApVxItkqobW9E4VCCF295GXOGJCU4oSh3YWyIYMh5qYhbMwEeWuhLcQhOSmiiMFbCcErcWLbloNwTxkCprsCq989hikL5yI1yYBzVQ0ocwG11c3kRTqUvIA6tIRPtHosnYFQFPiDIjLJD6ykIAv129/EcX4oZgzkoIaD2nfUX4C6UmqzLbwRUm89unu64G1sRlBMMFiydqQ6CKFpbUfAxqGz2wsDeYmnkOs4VFODDm8SumvPQTa7YTQmvMzkAJrqOuG1yaio64azPBUZpKy9NefQnVdEiFsTzO5yCOilo8LzxL0UMiihnhvGITMwJN+AsweOQ1WC5P4loXxIOTKsXqx64x2MmDUTnD0Z/cv6wxmqwEurazH/5uFwiHTWnBBhvhONHhUDnVYIDh71Dc0odoogD0wYBsXcoRSc2bwShwzjMXtwGlpOHiFbWLjIQKSysR2h0hS0tvRq/Yh3ppOB0H40tnVDCtShTbZjlCGMM+Z0DB5YghS+Ga+9vgfTFs9FuKUV2SX9YTfQ7NxGJNkVtHR0o8jBoIP0BesFvwGG3reE54Zgt8PF9aCyvhNFpi40e2TQ9J3uZDea5GQMHV6OnmQX1MxknDzLRrSzGVVLjqgNRMjAUQ74EZAEpKQ5CEFMwcBh/ZCbkQLBnY2ukxUoKC1HbhKDt198E/k3PYjRyfQusHCmuSG0EvI8eBhY0kcHGlIuCo0QrDakCAFSvw6Yk1VUt/qRVGCDK2BDVT3pS/lG1Ld0QiEDnobda7G5sxiLJ/WHv/YomshAzkJ+04y/Fj0BMkgkvxE/Gbyx9jQ4pCOobeyEILSgwStgDLlvzWA1+UMKV0oyTC0ODBo6HKH0ZISS0+A7U4OSQf1RkGnH9pUvwzB8MaYNTNGa05GchJOdbQgEU9HR6dFidy2p5D56TqGeDOySfA3okE0YHVVnoAk701KsqKitQ1eeEc0VVei0k/sYvX5OMELsbdd+K92NLQhLES+xoLcDjQ3kekN16FFNSCKDabvNBFduCYZkkN9Lcj9Csr04fbQVmeUlcGin68amlWtgm7QYZU4WJ47VROJ741DR21SPTm862qqqtVwS3ppdWH9QxvVzhgFtZ1Abs35oLtGR5x7tC2GPDyGpL42cjbRboKIVvmAeusngMyxZEWw+itc3VmP2wulwVXXieIMYd5lmeR4uKpdK+qzoBtq7gpTzwJ6SCmuViKIBQ2Ek5KPI0DeLZSGDaJOxE9llQ0hv60ZWIfkdkb+2vHL0L85Gx8EN2BMqxYpRAhg5CJ8/TAb90ecQqbzZQY83IIscn8Z4kJmvINlIJbtlhIP1WPvWdoyefwPymQ4cqOz95HSc/kHQczJ8eqAx0U5efmfr/nMjbp1R9qs75wz56XeWj/xM42evBnEp3siHCLj4P9p7Md1ixlsP3IWlT76EQ54ObH7yb5g2fiJaRw7GgVTy4/zzr5FGno9ZrR0QmptpmgNU+7xoTk2DqSA
fKnm2qZR0v7MBc97ZijxCrAOEnDc1NWpKS5SwbyFjw8aaduQPG49yaS2WL3aDdffH1vc7cGzPZpgVJwYPmY0zldXweTOQ5LTgy7+4C8++9A72H29GfV2dNiDvDTMozUvHuIFFaJHDaGnuQm8HGfs4LORXHILBbIGJDPpNBhGdPb0QVRG8TL4h9W/paNNCAWgCuey7l8Ntuh+nnn0c5Zk52Eu+Gz9+Anbt2IfS7ExsTyeEvasT7LkWnD19DLzNDMVkhULeeyp5JoqkLUWbkVw7eRaSZ6qFPmfau8AGwuSzgp6OTlgNHDijAIvZil3nTmKg2YaRY4aBJ3UykNFdZUUAOw52wpWUjYDZgf0H9mLc7OswYMgQBPwhqBYjBhQUoaigH84cO45Xnnka72x5HkZPL+qMadhz8CDKhk/GLMdgvG20wFOYBKWzEz1eESvbqmDo6cCuJ/6MLT+5DbUVXQjXHMJSUxClDhM2DH4Uyy3Po4m0X1aWC8MJvXt3zVMYPGIWjA4rUsffjz8+/h8YOagH6dZ00nZ+iHyIXM9w7N9fBZPSS9ZNZMzE41MPgP8XhdvC7Tt2stac4bZW37do7H0pNw1p+qzr9FmBkGtKqpdCGwVcFr1+v38vGV83EPLZW1paeiPLslfyRqOPtzsIub+poqLiP/Py8n5HzvORFR0OHDgQT+JG8y7MnTsX6ennn/4//uM/tJCJpqYmfP/739c8G8aOHat5LgwYcH7Or8zMiIPHmDFjtBCKNWvWQBRFtr6+3v1R6/hhEQwGLR6PZ6zFYrnfZrPNstvt1OX4E4uaHz169LSOjo685OTk2o9blsVi7UTBjDJZknhJlnmj0Rik2z9JoXH+lltueZEu+JA5GfpPXoqYkfGa5V+54Nsk3Hr/vRcdI2QMx/33Do98SB2HB6K2k6X33Rnf5/oV90dW8ubg7mGR1QlL7o5/7ywbiEQY3Xm45d6+7x0jZuLhaDb4u1ZEc1maBuC+R6KdMXM2/j2aVH1F7Htk4eaH7tDWym7rq/eSBx6K1mUx4hM0zj7Tamoi62QElExahLi2TNJw3B271utvQSxB/ajZt8aVAe6IFVo8GfdH13Oixbtm3YqYD8uKu8qjZU7H/dHLd2WPwD13RUudvhTRPXD7/ZEC8m66J161eXc9EF27DrFDkkk7FZ2XNZ+AMyMz1w2bw40Zt94X3XgTYiaupFk3YVh0/cELLMys0Y7rlvW13V1fiaacz4lNuCXjtth9zo9JMYzGAw9FEnfdem/sXrhwy13R9Zw+BZ4xM6JqIndGryX7LsR6wsL7vxRdK4j3ydJrl8Xrfed9sa2jcO8DkdbPnhurlwN3kzYLtxzG5i4L+kW9GRjOiGtv7rue+fd9LbJStCJ6jnQ8EItoSivH/XeXU191jJ08CQYaehGSwJlshJhbUDT7tni7OQZE2iXv7mg/J8+eqUti9yoVD0TtzgNn3R6/Prc9sjF7Ut/k5ZKHHkYfWKQMvQGPxNQdXAk3R5XJANOrDeRYWzoWrOiTe74h2td7kyeC7aEzL+QBKTFITXcjd+hNiPWAa259MH7M3bdH+7RtAr4U/Xnccn/s9+sm/S/SE0tv7TtPyaQlfb8LR/Sqxl8b/03Nvu1+JKJo/ALENFAGzbkt/ju4+97Y2jDcXTgs4QgBRYT0x45JGh+btVmGSGvn456CyJasGbci9lr68qMj4/W+pyiynn9X3z0vd6bBV3uYDP5tKIznnUjCDffFfk+pePDRC39EDOxkgO222eCeuDDeB++PVj1n/oq42sI998RkGTgUTei75jELY3WYhRVRIT7XlL5ny4P3ROudPBePJuS9YwQLJi+6Lf75muWx+0b6auwBkexAn4mBgaNgDO6/J6Yy4YBLG47kYGH0J5oxbUm8H94T65zl5NkcKy9vJHmmj4wf79TeTnm4OyqOs+KRgshK7vX4+iedo0/HPyWcvLJp+8FzAxdNKf7zPXOH/+i7y0d+6rJj/2gIjBG5ThO2PnIH3q+txpdfX4u1O3fCvH0bhhYVImfcKJxMc6MyMxmetCTNc5WjoRKqBHMohKzqJgw9fghtL74GMRiEe8JwzLx+AbL7FYML9mDbO9thsNajZPgg2MJv4nhnKx59nIE/sA0ZxjotPOmOKeNxrrIBnjYZSblWOIwinnh+K86cOI7iLCPmjh+E2u4wWrs88HuDqOtoxPjsVPKeL8bf3hKQnepEu8yii1HIOCsNsq8GZgMPqa4dDHmXKITwU0t2ByH/noZG5PbLg9dhI8+4hdgrSxBHDsIeRoZcvBAnzTaoHS2o27kfjF8k71cFaosPTKYCob2ZPFZd8AoCOhuC6DdkJAzkOnmrHWHyDq0ipCKY7YSRJkeQw+CsRoR7fHjbB4wLdCO3tYWQ+jSIVh5r3nkfFbRcnwGlQ/pj/Nz5WLlyJUoGD8PvDr0LwumxvfYsVviCMNvt+MXba2Ht4bBv7SaUjR2JoQtvhBgKoL67C55gO9hWGaKiwt/RpoW2qTRRpcmKZe+04u6SfPzy+gbUmbvwizd2wMKswfMpQ3D/0Ga8V5+DRYNFHDK14Ju3L8b93/9vQg5GIj//NWzZ+C6O1myF1c5j+rXXYev2ozi9511w6QaMnDKd9IOoPriOq4LTxB47U9GguGzGlnvnj7sn/abBdZ91nT4PyMrKOlNZWfl2UVHRHVfYzdDT0/ODwYMHb41+fqS3t/cBQox/hsurMlKYiouK/i8cCt7i8/lmWa3W7o9Sx/79+5+gf+mExU9+8pOLDAwx0O/J9eB3v/sdjhw5grKyMpA6XrZc6vFFPfPpvk1NTWpKSor3o9TvahAKhUyE9E8n9aNEeLrJZDKS5dM6HcXg5OTkHV1dXfOTkpIOfBIFcjwvcZ+SeghfXV1dUFFRUVxQUIDij+rrqOOfHwYHrrn5zg/e718UhvShmPVxo4lZIwpKY7SyFHfdV3rF3f8hYDjkDp6GB66gGm9PzkFZcmR98k33X37HLyCseUMxO+9q9+bQb9ICXDqbjo5/RnxaORm+aLBx8o49R2oKrxvX79kH5w/9nuG2EcHPuk6fJmK5EowGI6YV9seJ/+gPr6hgW0M93qo6h8M79sMbEuCQgTwD0M9hxJzMbEzNTUOB2w6+yAYU50NdsDAiCq/QhJCMlueEekfNWboIs5cu0MJVWea/aBYUiP4ePEC2vb+3El96aBJ2nK1GSlo2ZBOPmsYGmHOMqKzqwN03TcTMa4ch2HsC7286gP02v5bLSeaMaFYlpPmB+ZMK8d7uBjRXnkQGeT90BCQk0/wOBgFWUp6NCUNWjYR4izCT81qaW9G4bQtYqwmGgeVwl5XCIdgQFmV0tzSivbYRguQDwhIMchhTC4swecxUzJ81AXUBAQ+t34q2I/tRnWVETX0tHP4wrBYz2uvrISe5wKQN00IrvMEwOJMF3eiElGTHj86dw9v7TuN/ls7DgUMH8fCX7wHjD2H3vn3o7O6BTVbRb8Q4fHXDRjR5O8FyKvZ4/Zj3w+/hl8vuwMLxE5GS7MY9K+7FmfpKKF4J75ypwH+teo
20mwlKXZOWJ4O2v6qwkBUJErm1Z7dvxf/uYOCbkIOWQztQPNiGtp5WdDVsx0vcdOQoJ/GiJx0FOWnwyB34v58/hK4ODvPnrkDBoMGYYJ2tuXevfPJptDfsgyMzGY889itk5pVfuWPp0GA1MBXV1U1es4Hr+fqK8Xdl3TLk3Gddp88baA4Ct9v9K7J6JSODMRAI0BmrmJGBkvc/hcPhp3p7e79FyOz3cQmL15H31+LV3/4nvN3to/IGjF1553f/tCTJndzxYes4atSodT/84Q/3mc3mUZMnT/7gAz4EaNjFE088Qb0lzt11112//UQLj4K0kznoaVuUlWyn5f/DvCUIspOSkrbW1dXdmZub+9o/8LwfGvyLL754y0fJyaBDhw4dOnTo0HEpWFh53/5jtVnXjMh97UuLh33LdPtI/2ddp88CsVhcC89iem425uRnE3Iua+EB2vdUNpMsihYuGpPGjCCuOxctIx6oQaNMwWpL5COLcNBPSH0Lho/IAkj5dJBtMhnR3NsMg80IszWMQUMGYOK0ayHK52BmrBibZ4Qn4EGbKKOnh4EaCoILNyMtbwA6ek/BGhBx1uiHNdsFc1k/nGxoQ3dNE8TuaqQNL8J1+WUI2dpRazPhv3/3e6i9QSy97W7UtfRCyoiE7DGSCqPJgB/OX4yd721BRnIylj14F7KyCpAlAIWsjBMFt+GnzBL8dtcuDNiyE52rXsWo8kHod+u9WGMxo9pM2qW+AZ7OLqQkudEs0kS1VvRyAt5L82PMxrdg6urEs7/4A9I5FqIkoaWrA988ehIBuwO86gedplMYBQL57pyiYNGzT8Lg7YFTMMButKCzrgKtfi+MBQUQrUaoohFs2ADWL0XdfBktmz29SarAokPh8H872jDInIc1z1VRR1CkpgcQbHsbO8lxeRkytolh9C/NQ/9cM/zJQZw88iT27w1GyqH1gYh5t9yC65c9RoizlWxXtKTWHPfPlSD3HwETh7rmxtYWRVHlL981fkVBxtBTn3WdPu8g5J220TqyXCZJsIrCwn6jGhsby7KysuLtaTAYQsnJyT/o7e39lcVieZbjuLkieTZse+spdsNzv0QoEFc/ZGqO75r2o1uHtY2as/yPN9z77cfsdsdVq3VYrVbfo48+SkP19546daqEegDQyW5BuFhAgeZk+OMf/4gnn3ySJqLUQieGDRt2UXJc+ttqa2vDuXPnMHjw4GqyXE/zEFxtna6E3l6PO+RpuiaF6f4SehvHGlTZpIU2dfFA3jTSSV1XW1TI5/OdJvU8ROrWQa4hRKtO/kpkEUm7lDkcjmUfUIY1Nzf3ZdIuXyf7/+aTTC75SYIvKys7tXTp0lfIjVj6YQ5Uw93Y+fZqHA+4MXnqJJRlXuBZE+7Brs17kD7xGvSzfrgkNp8fyDi2bSPQbzIG5VwYkS+j7dwpdBjzUZZl+0xqdyECbTU41WHAsLLMT8zZrmX/a/jzu80ozEsFG/JDtJFB0twZSDNdcAY5gF2rX4evdAauLbs6l4BgZwNONCsYPiD3qusrBtrw6gtrMW7+TUiSGlDrd2BIUdQBXA2jYvcGHBGLMXt8CRr3r8UubyluGWHAu3sbMGnWRHw+7tQHwFOD557fjGkP3Ymcz7ouV4BIk7tWtmHgwBIYuCvfwXDtNjzxvorly6Zc0Qfv44CqkRzeTu5/Y4gMJEMw5o7G9dMG4MKuemmEcOTddTjQGISRlWDIGoTrJw+5hOyuB6se/zOquCyk2xl4/TIGTr0O44uSLyqxp/UwXt/QgoVLZiHpQ48ZA9j29xegjF2GqYX/WAlKHX3QczJ8OJgY+ejhk3VJEwZlrv3qkhHfsNwx8uJsXV9Q0PxQpkjyKUIi+YtS+32cURJVSOPtKVgwdRSaqvehf047TMiGNxxAajKhsmIulEArRuVb8Mc//Q0L5s1BTlIujrafRU0Pi31NfphFN1KsrcjIL8Uz63bBZEwnZINFsDeEbtYA07kqKB11GJ1XAjbYiapd+2Ay2fG1u+9Fb0838lNS4ejnwP7NG/Bwyzms3rAPqDqIEBvCDFJm3eGj6GpsQIk7BYc3voex99+vEQJNzcMCfKNHwNqSIkxv7UJx2bcgSiLWrl2L1CEjUT1nGpieLjLGqUNG6VgITgeCvg6EFBkKw8LMmBHIyEGdJKNW8oIJk+12qyaPqZJyQjKjKWVxIhmHk8+0bIUQ+jCjIKCE0BoSIfUjb1uaGyIsgaHqQMEQ2YeDYjZrObM0S0M0ez0VG2U5A7k2Fvs5Ac6pZegXaESq0AB7qgeTkpKQmWHCL1dSZaNWGJJzyWkVsjCw2GwYOGAoZs28HqMmjgbHm0kbGKP5ebj4OXSAJldvaWtprwmEwsIP7ppwR1H2sGOfdZ3+mUBIe6Cqquq3hYWF3M41z40+c/B9VxPhLW2N1VDkiHf88Nl3rL3j6//vkgYbu91OZTHni6JoYnlBzB44aaDZ9uTqUMB3nqQl+R0ze99+7uGD77x8z7XLv/KD2bc++j8Mw1xVCL7ZbO6RJGlgUVHR46S+d5+lim7d3VqOBipLuW7dOnzjG98AuY74MTRHG3020MSP9957r2ZwoOs1NTVUblMLpxgzZsxKv99/FyHgPVc4/QeiprpqQGb49N8Nim8IDdC4ZJAGebagdjNQcC1guPQeZNlRX1//2/T09DcEQQiTelEjy2XP29LS8grZ9+9k9UqjR5bco182NzeXJCcnf42We/VXdnWg4SCr1r17984Dx5adqawZTe6Vgd7bUcMGrx1ckrdm6eIb/nil4/mFCxe+QRd8yJwMZ3aux3tHGjBgUgmSrQo2v/F31AWM4GQ/7MXjMHeoDS219bCNSyhW8WHHmrWo8fOQwz5kDZuOwZZGvLJqLzLzM+BpboKcnAMnK6O3ux320qlYOKk0Im3X24iXXnwPo2+7CYVmBWuffhYZM26Df9ufcTiQgTSjiBavjKwUKyEbYbT7eEyfNx/90yL3J9zbgu0bN6NNNSDQ3YOcsdfhmv5BPPnL1yGU9APr6QB5zcJlMRG+3IaAczAWzxuLtoZaGNJFNBzbhPf2N4M3KAgZc3DtcAc2r96IJnMZQF6Ads8hbN5fB8HAEXrgwsy50+HoPIlVm4+DN7HkZW/F9LnXIcl/Eus3H6GmQviCHMbNnoPytFhHE3Fy6zrsqwlA4EUw7jLMndQfZ3a/g5OtInlBEvKUPQTXTeuPXU/9CSeFAqQJXrQHnZgzZzDOrFuD3R1miOoscGfXYmOtgPKSMpQXkBfhfnIdAmnXoA3XLJiDvEgWO0jBbuzeuI6QdR5KKARX6TjMHluEeAg6wyN3wDjcOG8k6ekqTq5/Flv3n8MYazW2Hm2H0ajCz6Vi3sxR8dsskXu16d3t6CQv1LBfwoDp12OAtRlvrd0FxWBCiFz3qOkj0bxlHbY3MQhKs+Fs2YI3TkgoK+qPIeVWHNxdAc6goqdXwLRF81Hoitco0lLd1dhABjNnA0mQrpuLIksD1m06At5sRK/vBJrKs+L7yqFe1DZ2kedALw5s2ojj3eRXK/WAzybXOrEcl
*(embedded base64 image removed)* Data Exploration Load the *dataset*
###Code
import pandas as pd
df = pd.read_csv('http://tiagodemelo.info/datasets/dataset-uol.csv')
df.head()
df.shape
df.CATEGORIA.unique()
###Output
_____no_output_____
###Markdown
Check for null values (NaN)
###Code
df.isnull().any()
df.isnull().sum()
index_with_nan = df.index[df.isnull().any(axis=1)]
index_with_nan.shape
df.drop(index=index_with_nan, inplace=True)  # drop the rows that contain NaN (positional axis is deprecated)
df.shape
###Output
_____no_output_____
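###Markdown
As a sketch (an alternative, not the notebook's original approach), the same cleanup can be done in a single call instead of building the index by hand; the name `df_clean` is illustrative:
###Code
# equivalent to dropping index_with_nan: remove every row that contains at least one NaN
df_clean = df.dropna()
df_clean.shape
###Output
_____no_output_____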
###Markdown
Add a column to the *dataset*
###Code
ids_categoria = df['CATEGORIA'].factorize()[0]
df['ID_CATEGORIA'] = ids_categoria
df.head(n=10)
df.ID_CATEGORIA.unique()
column_values = df[["ID_CATEGORIA", "CATEGORIA"]].values.ravel()
unique_values = pd.unique(column_values)
print(unique_values)
category_id_df = df[['CATEGORIA', 'ID_CATEGORIA']].drop_duplicates().sort_values('ID_CATEGORIA')
id_to_category = dict(category_id_df[['ID_CATEGORIA', 'CATEGORIA']].values)
id_to_category
###Output
_____no_output_____
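###Markdown
A small sketch (not in the original notebook) of how these dictionaries map a numeric id back to a readable label:
###Code
# recover the category name of the first row from its numeric id
print(id_to_category[df['ID_CATEGORIA'].iloc[0]])
###Output
_____no_output_____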
###Markdown
Distribution of the news articles across the categories
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
df.groupby('CATEGORIA').TEXTO.count().plot.bar()
plt.show()
###Output
_____no_output_____
###Markdown
* A recurring problem is **class imbalance**.* Conventional algorithms tend to favor the most frequent classes, i.e., they neglect the less frequent ones.* The less frequent classes are often treated as *outliers*.* *Undersampling* and *oversampling* strategies are applied to deal with this problem [[1]](https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis).* Handling these strategies will be discussed later. Prepare the dataset so that every category has the same number of articles (a vectorized alternative is sketched right after the cell below)
###Code
TAMANHO_DATASET = 200  # number of articles per class
categorias = list(set(df['ID_CATEGORIA']))
data = []
# simple undersampling: keep at most TAMANHO_DATASET articles of each category
for cat in categorias:
    total = TAMANHO_DATASET
    for c, t, i in zip(df['CATEGORIA'], df['TEXTO'], df['ID_CATEGORIA']):
        if total > 0 and cat == i:
            total -= 1
            data.append([c, t, i])
df = pd.DataFrame(data, columns=['CATEGORIA','TEXTO','ID_CATEGORIA'])
fig = plt.figure(figsize=(8,6))
df.groupby('CATEGORIA').TEXTO.count().plot.bar(ylim=0)
plt.show()
###Output
_____no_output_____
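###Markdown
A vectorized alternative (a sketch, not the notebook's original approach): since the loop above simply keeps the first `TAMANHO_DATASET` matches per category, the same rows can be selected from the original dataframe with `groupby().head()`; the name `df_balanceado` is illustrative:
###Code
# same selection as the manual loop: first TAMANHO_DATASET rows of each category
df_balanceado = df.groupby('ID_CATEGORIA').head(TAMANHO_DATASET).reset_index(drop=True)
df_balanceado.groupby('CATEGORIA').TEXTO.count()
###Output
_____no_output_____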
###Markdown
Text Representation * Machine learning methods deal better with numerical representations than with raw text.* Therefore, the texts need to be converted.* *Bag of words* is a common way of representing texts.* We will compute the *Term Frequency* / *Inverse Document Frequency* measure, abbreviated as **TF-IDF**.* We will use `sklearn.feature_extraction.text.TfidfVectorizer` to compute the `tf-idf`. *Bag of Words* It is a text representation commonly used in problems related to natural language processing and information retrieval. sentence 1: "Os brasileiros gostam de futebol" sentence 2: "Os americanos adoram futebol e adoram basquete"
###Code
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
sentenca1 = "Os brasileiros gostam de futebol"
sentenca2 = "Os americanos adoram futebol e adoram basquete"
texto1 = word_tokenize(sentenca1)
texto2 = word_tokenize(sentenca2)
print (texto1)
print (texto2)
from nltk.probability import FreqDist
fdist1 = FreqDist(texto1)
fdist2 = FreqDist(texto2)
print(fdist1.most_common())
print(fdist2.most_common())
texto = texto1 + texto2
fdist = FreqDist(texto)
print(fdist.most_common())
###Output
[('Os', 2), ('futebol', 2), ('adoram', 2), ('brasileiros', 1), ('gostam', 1), ('de', 1), ('americanos', 1), ('e', 1), ('basquete', 1)]
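###Markdown
A sketch of how count vectors like the ones shown below can be produced with scikit-learn (the column order differs from the table because the vocabulary is sorted alphabetically, and `token_pattern` is relaxed here so that one-letter words such as "e" are kept):
###Code
from sklearn.feature_extraction.text import CountVectorizer
# counts occurrences, so "adoram" appears as 2 in the second sentence
cv = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
X = cv.fit_transform([sentenca1, sentenca2])
print(cv.get_feature_names_out())
print(X.toarray())
###Output
_____no_output_____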
###Markdown
sentence 1: "Os brasileiros gostam de futebol" sentence 2: "Os americanos adoram futebol e adoram basquete" *(embedded image removed: bag-of-words count table for the two sentences)* Sentence 1: [1 1 0 1 1 1 0 0 0] Sentence 2: [1 1 2 0 0 0 1 1 1] TF-IDF TF stands for the term frequency. IDF stands for the inverse document frequency. Text in SKLearn Options (parameters) used:* `min_df` is the minimum number of documents in which a word must appear.* `encoding` is used so that the classifier can handle special characters.* `ngram_range` is set to consider unigrams and bigrams.* `stop_words` is set to reduce the number of undesirable terms.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
tfidf = TfidfVectorizer(min_df=5, encoding='latin-1', ngram_range=(1, 2), stop_words=stopwords.words('portuguese'))
features = tfidf.fit_transform(df.TEXTO.values.astype('U')).toarray()
labels = df.ID_CATEGORIA
features.shape
###Output
_____no_output_____
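###Markdown
For reference, the classic weighting is $\mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \times \log\frac{N}{\mathrm{df}(t)}$, where $N$ is the number of documents and $\mathrm{df}(t)$ is the number of documents containing term $t$; note that `TfidfVectorizer` additionally applies smoothing and L2 normalization. As a sketch (not part of the original notebook), a chi-squared test can surface the terms most correlated with each category:
###Code
import numpy as np
from sklearn.feature_selection import chi2
N_TERMS = 3  # how many terms to show per category (illustrative choice)
for categoria, id_categoria in category_id_df.values:
    features_chi2 = chi2(features, labels == id_categoria)
    indices = np.argsort(features_chi2[0])
    # get_feature_names_out requires scikit-learn >= 1.0 (older versions use get_feature_names)
    feature_names = np.array(tfidf.get_feature_names_out())[indices]
    print(categoria, ':', ', '.join(feature_names[-N_TERMS:]))
###Output
_____no_output_____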
###Markdown
Creating a Classifier Import libraries:
###Code
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
###Output
_____no_output_____
###Markdown
Split the *dataset* into **train** and **test**
###Code
X_train, X_test, y_train, y_test = train_test_split(df['TEXTO'], df['CATEGORIA'], test_size=0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Create a model (Naive Bayes)
###Code
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train.values.astype('U'))
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = MultinomialNB().fit(X_train_tfidf, y_train)
###Output
_____no_output_____
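###Markdown
A quick sanity check (a sketch; the original notebook only probes single sentences): accuracy of the Naive Bayes model on the held-out test set.
###Code
from sklearn.metrics import accuracy_score
# the test texts must go through the same vectorizer/transformer fitted on the training data
X_test_counts = count_vect.transform(X_test.values.astype('U'))
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
print(accuracy_score(y_test, clf.predict(X_test_tfidf)))
###Output
_____no_output_____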
###Markdown
Test the classifier we just created:
###Code
sentenca = 'Os estudantes de engenharia demoram o dobro tempo na faculdade.'
print(clf.predict(count_vect.transform([sentenca])))
###Output
['esporte']
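###Markdown
The same call also works for several sentences at once (a sketch; the example sentences are made up):
###Code
sentencas = ['O time venceu o campeonato estadual.',
             'O presidente sancionou a nova lei ontem.']
print(clf.predict(count_vect.transform(sentencas)))
###Output
_____no_output_____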
###Markdown
Model Selection We will now experiment with different machine learning models and evaluate their accuracy. The following models will be considered:* Logistic Regression (LR)* Multinomial Naive Bayes (NB)* Linear Support Vector Machine (SVM)* Random Forest (RF) Import libraries:
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
###Output
_____no_output_____
###Markdown
List of models:
###Code
models = [
RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
LinearSVC(C=10),  # the remaining parameters were left at their scikit-learn defaults
MultinomialNB(),
LogisticRegression(random_state=0),
]
###Output
_____no_output_____
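###Markdown
A sketch of how these models might be compared with 5-fold cross-validation (an assumption about the notebook's next step; `cross_val_score` was imported above, and the idea itself is explained in the next section):
###Code
CV = 5
for model in models:
    accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
    print('%s: mean accuracy = %.3f (+/- %.3f)' % (model.__class__.__name__, accuracies.mean(), accuracies.std()))
###Output
_____no_output_____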
###Markdown
Cross-Validation Cross-validation is a resampling method whose goal is to assess the **generalization** capacity of the model (a sketch applying `cross_val_score` to the models appears just above). The split between training and test data is usually done as follows: *(embedded image removed: train/test split diagram)*
Gfj9cdICUj5ixRnbRbWuHFjb8T3sIPZzJkzix4Aa/MJJLfXe+65Z0bjcNZ48MEHM84HOTF27Fjz5ZdfBrlU14hAIgkMGTIkY7zE8eOyyy6z6TALEYIzn3766YYxNJeQz/eGG24weBmHlenTp2d4JqP4+VYD3bJ9aR2XLl1qOCTxEJDyFw/3qqqVyPW+FR5WaNjmCyJEmH/ggQeCXKprKogA+UCZHLjy/PPPm4ceesg9nfUzTkX333+/DT4rEYFKJsDKX9++fTO6gPnMJZdcYu68805vvFT3BnZX5s6da0NpjRw5MvCkCA/dYcOGmbvvvttmU8on/PZQKu+6666MSwcMGJChyGZc9O8Jgvv7tn5HjRrlfUdQZyEKqq9unfMTUJBnPxedDUmgS5cuGSndWKFhlslA44txRRVsDbMNiNFzKmQAjgKLFy8O2QJdnkQCDPjHHHOMTQvlytSpU83nn39ujjrqKJvtJZuQd5SJQbqHLy+dKVOmZLtF50Ug0QQYExnjXO93VgAJrMx4iJJIYPwWLVrY1TXGR5wrOHB4IgWmz946SMe577HHHrO2hptvvrmtiwDPTNQ4Uk58H3/8sd1K9m3Rtm/f3sYCDCJk9mEiOGPGjDqX0w/iCKIMpxwE8R5mcrjHHnuYgw8+OEjxuqYAAlL+CoCmWzIJ8ONlW8C1KeGFfd5555kddtjBdOjQwTRp0sQwY8V2BK9etivSbQMZINgikPKXybhSz3Tt2tX07NnTzJ49O6MLvMD4fnTs2NG+6Ej1h30QRvA4dbz55ptmyZIldWKJkYaqd+/eUv4yaOpEpRBgUnTppZeaCy64wKBguYJ93euvv26PUgpjMQqoq4Tmq5Pt3nPOOceESdeIIsfK/apVq+oUz+qjb9cH5U9SOgJS/krHtqZKZrY4aNAgc88992T0m4Fszpw59sglbdq0satEKJGS6iIwdOhQay80b968jI7xAiJDDEc+YQJxwgkneA3PfY5H+crT30UgLgKs5hE8mRRp+Wz1grSRoPr8zkotTNJQ/ILY+qW3JTW+s90siZ+AlL/4n0HVtIDVmBUrVpgJEyaEjvreqVMnc9pppwVOAVc10GqkI6wQsNXFC+PZZ58N/f0AE6vCvNywMUVhdMVnU+Reo88ikCQCDRs2NGeddZbp06ePXf3yBcvP1V7CrKD07bPPPnblPJ/suuuuBoeradOmhd4yJnTT4YcfbusrdKLF6h9ZTlgkKCQaRL7+6e/BCUj5C85KVwYggC0WNiTjx4+39lz5hPyv++23n9lrr70McaQk1UuA5zt48GCzyy672Bdd0K39zTbbzNr+YFeaEl+oGF84ieqlqZ7FQWDgwIHG9WDHJq9Y4budspvGxo6Ub8uXL7dODxyrr766YazkO77xxhsb7KI5tttuO3suqDRr1syunDOJwt7urbfesvZ82NlRD2Y71EWZmOhgh4ctYPfu3a05ThSCoooZCI4n1P/VV1/ZRQMUYcw+sP/FTMgXEzGK+lXG/xGo968R6T9RwsAIO4gHUZR11lpZuMdjGJwu/DCDBNtM3UNSbuyq0oUfns8zsxC+qZRv2HRhs4WRMjM9Zoys/rRs2dLOVLEH40efLgxCbtR47snmNJJ+L84B7qoQA1iQmSoOKq53cliuLisUYDeSPX3PZytDAFQ3kPG6665rGLzzCX1ww6HA2JduLV9Zqb+jbC1btqzO5bwggsT4ylYHLxxs+rD95HnzAuB7zcoA5TKJ4IXIS859foQRuv766+sUzbWEu5BkJ4DXpU9xzn6H/iICIhA3ART/qG0gtfIX91MtoH5WQooVlIggikSh9fCyJpWRL51RvjJR9MLak6TK5OVWqEQxg3fr9oXAca/xfV5nnXUMRyHC9ieKU5SCQhZ1mSjWHP379w/dVF+wWQZIiQiIgAiIQH4C2mfLz0hXiIAIJIyAb8uY7TCJCIiACIhAfgJS/vIz0hUiIAIJI+ALTVHMqm/CuqfmiIAIiEBJCUj5KyleFS4CIhA1Aewose1MF5xJ2rVrF3VVKk8EREAEqpKAlL+qfKzqlAhUL4Fnnnkmo3PE/8MuUSICIiACIpCfgJS//Ix0hQiIQEIIkDHGlymE0BESERABERCBYASk/AXjpKtEQAQKJEDYmiiEcDAkvXdD+eC1vtNOO0VRhcoQAREQgZogoFAvNfGY1UkRiI/AqFGjbOgegnkXGvpm0aJFNg0WMSBdIfBuvriJ7j36LAIiIAK1TEDKXy0/ffVdBMpAgJU6shbMnTvXOmWQ4YMI/kHi8uHY8eSTT9qcwL549N26dbPppiQiIAIiIALBCUj5C85KV4qACBRBAOUNmz2OMWPG2ADPpI4inRMBrclCwhbxL7/8YlM+kfnDF8w51QQSxZ944olFtEi3ioAIiEBtEpDyV5vPXb0WgVgJoAgSsiVI/mdfQzt37myGDx9uyGYiEQEREAERCEdAyl84XrpaBEQgJAFyEkcl5BM+8MADTd++fTPy/UZVh8oRAREQgWonIOWv2p+w+icCMRNghW7hwoVmxowZZv78+eaPP/4I3SK2hnv37m2TmwexFQxdgW4QAREQgRoiIOWvhh62uioCcRCoV6+e2WqrreyxatUqg+cu6dmw6+P49ddfrULIge0fyh1HkyZNTOvWrU379u0L9hKOo7+qUwREQASSTkDKX9KfkNonAlVEgJAsHTt2tIdEBERABEQgHgIK8hwPd9UqAiIgAiIgAiIgArEQkPIXC3ZVKgIiIAIiIAIiIALxEJDyFw931SoCIiACIiACIiACsRCQ8hcLdlUqAiIgAiIgAiIgAvEQkPIXD3fVKgIiIAIiIAIiIAKxEJDyFwt2VSoCIiACIiACIiAC8RCIPNQL8bmijOgfDxbVKgIiIALVR4D8ycRdlIiACFQOATIbRS31/s2x+U/Uhao8ERABERABERABERCBZBLQtm8yn4taJQIiIAIiIAIiIAIlISDlryRYVagIiIAIiIAIiIAIJJOAlL9kPhe1SgREQAREQAREQARKQkDKX0mwqlAREAEREAEREAERSCYBKX/JfC5qlQiIgAiIgAiIgAiUhICUv5JgVaEiIAIiIAIiIAIikEwCUv6S+VzUKhEQAREQAREQAREoCQEpfyXBqkJFQAREQAREQAREIJkEpPwl87moVSIgAiIgAiIgAiJQEgJS/kqCVYWKgAiIgAiIgAiIQDIJ/A/DhqjZ0SAlQgAAAABJRU5ErkJggg==) Na validação cruzada: 
*(embedded figure: cross-validation diagram)*
iKmHYjpBSJ8bRe29rMdyxAYRwHnknHca9rcKiBgbRXp8XdrkBk3aFroEsHqzp07i9j2MCM1q2x33o4YSZqPJu22nXy5Vttu683Xa/nNy2Je117THMxXO9RLYLEF4kPOvu/tb9ytPN3JfLHbZPsEFkvgyIHDRdyRPJ93crHaYrsERkXAeWJU9oR2LJTAh+Ul/qlEWJrC0X63/0E5ijUvMcXAeydOFPn8rC9OTRXxiBIjYC8upwfYWgaxeTDb7/YsR2AcBJxLxmEvaWO/AgLWfqVGZLkIIS+77LIZI1YjpIxL61tDzV4ja6vsUmvAWmXdC1VXeCWzCFZ7zfm6UO2yHQILLRAjVXd84aEZN7uKeZRi/tbWy5anPp5zcqHbaHsEFkrg5TJY3fvw3hmbW33e6mJNOV9xXuJ9I4BdqL1iO4st0Ok8ETeOa31vHCxvSOiHusXeY7ZfhUBrQFpFnTG3aoxajblaf1re9CqC21Tiv1+MRxm4RtD65Fe/alRrFejqGBkB55KR2RUaUpGAgHVIyAjk5jIvamsIOtfN5+FqhIBPP/10x3lLFzJgrWr+07l6VLV869QA0Z+w7lTy6QzCOZ9/9o033ui0mucJjIVAHq7Gl+Vbdt8y64ZWqSMC1rHYpRo5oMDU81MzwtUv3Lqx2Hzf5lk/NET1U8+/KWAd0Nlq4yew5+49zR/hep0nDq+7TcA6frtYi9sIrCjnU03l4nXrZtx4qs3is56Ky/7blZgKIMLTKD8tpyGI6QJiOoIIXGPu1ig/K//7H//e7xWvbN9enJdNM9CuPs8RGBcB55Jx2VPa2a+AgLVfqY+Xaw1IL7jggqLf+UbnuKlZi7/66qvNkasRrkaQNyqXsOftiNGs8Wi1mtWh8ok8qFzMvkTAmk9RMJf5YFvXbddPzxEYF4FDrx+c8aV5259u73on83Hpl3YSGEQgQtNULrlusrjhOzcMUo11CCwpgelyntVD5ajUVJwnltTu1ZkuAp8q511tlo8+6nmTqy5VdXwpbpwVj62XX95YJuZ53fbcc80bZW0v//vVb32r4/peIDAuAs4l47KntHMuAgLWuWiVy0ZoGEFgCuPeLE96CxWwxrZSiW0uZiDZyhbtCZs0VUAElDFtQa+SB5mXlvMQLVZJ+7Xf7edhbKzbT5jcb92WI7CYAofLuSZTmTh/Qri6mDvDthddIL/kf/Irk4veHg0gMAoCRw7852YzYuqYVatn3/V8FNqpDQSqFvhMNnL0nenp4oNyTta42dV8lpinNeZiveqhhxqbiZGsMXVAPKcQGGcB55Jx3nva3kngE51e8HxngSuvvLL54kJehj+XeU7zALBzT6p9JQ+a42ZRvUpcVp+3s59Atledg74ebX/33Xf7fuRtveOOO2asN2gbrEdgFATyuz23zrfa2r6YN0khUBeBVRPdv8wenz5RFwr9rLnAyffP3OinH4r8vNLP8pYhMKoCnyvnQf3Ux9MExKX7+w6eGck91zbf9oMfFJ/+5/+88dj27LNdV48RrXk5ls3T2nVFLxIYYQHnkhHeOZo2sICAtQPdBx980OGVorjiiiuar8UIzF5hYoSI55xzTrFz586OdfbzQj5KMqYL6FbuvPPObi/Py2v5TaEiPO3WxgiLb7rppmY7brzxxpEakTsvQColMAYCy84+M79Y3JikW9lz93PdXvYagbEXiJu7pZJPF9DasbjMbf/397U+7W8CS1Jg+dlnRuxN/3K6OPlB58A1bhLnBldL8jCoZaditOrlGzY0+x6X7vcKO3fv21esv+22Wcvlo2FfKG9iFaNhO5XWbeRzwbZbJ83b2u41zxEYFQHnklHZE9pRpYCANdPsN8CM0YsxajGVBx54oGPIGkFjujFVrJPfDGmuOzIPdlNA2TpSNeq/8MILu4ab+XbzaQYiEM1Hyc61feGSj+6NALXdCN9oc36zrthOHs7OdbuWJ0CgOoENXzzzxSG+FD927aNF60jVI+U0Avf+b98s5+A7VN2G1URgBAVimoxU9n9/f+NGVnk5VQZLe8sAacfGHTNCpNb3zAh2TZMIDCyw9pLfaq4b54m4MWLrMR/niR0bH5pxk7iBN2hFAiMk8DvXXDNjFGtcuh83pmotEZhGABtzpsYl/Tc+9tiMRa6dnJxZz44ds0LYWCHq2fLoo811Y2qAc7OpCtIL+fywMX1Bayjb2j5/E1hsgQ1fXN9sgnPJYu8N269KwBysmWQEhGnUZQSVMeo0Lh2PQPCtt96aYR6BYCyT5hCNkDXCxFg+rRPr5YFqPD/MZfCxbjxSnbG9eKQ6Y3utgWtqdKfn83lPI1yNcDa1/+mnn57z/LKxTpjE9lIIHCN8o40RYMdr8ciD3FhnlOaTrerNpR4C4yiwdnJdEXPqpbknD71+qIhHPBfl+LETxYnp9lMDtH7BHsf+azOBXOD6715f3Pvb9zaeig//u7fubgSqMV1A/H28fC+0G53nkmjH0VIWmDh/TbHx1o3lqO39jW4e+9Wx4o61t/d1nvir8n2jEBhngdVlwPnsN77RnBM1wtMIWT8zMdG86dWHZSj643L6gHwk6d1XXz2j2zEa9u4yrI0ANso75XenDeVI14vLaQhSgBpBaTyf17Pj+uvb8qXpC9Ky0aZzyzZF+XI5j2s+8rZtBZ4ksMACK8vPUs4lC4xuc/MuYARrRrxly5YZQV+EhBG4trujfISFb7zxxoyRrGn5FLbm4ertt9/eWH7Y0i70jO3EI4WoqW15aPn222+33XQKbdOLeZ8HGc2ath2X/Od1RhD8xBNPNNqZ6o1lX3nllSJftm0jPUmAwIIK3PniXcXq81bP2GYErvFI4WpcOr1t37Zi5cTK5nLHyktFFQJLSWDivDXFLbtvKfKpAk6UPzLEeyEujU7havwAcdcLdzW7nsLXpWShLwRygavv21ys/9KZ0UfxWrvzxHV/cF2xPhulNP2284QjafwFYk7Uf/fIIzNuNPXnZRj6VDkdQDzikv8UdMacra9s315suuiiWR3fumlT8eQtt8yoJ25iler5WTkyNq8nlm1XT1ScAtu0kQh+Y47YeETgqxAYRYFBzyWHp2aPGh/F/mlT/QRqMYL1ggsuaAanEx//ktduV6dwMEZc5pe2dxpdGcs//vjjRYSnsU4eckb9sV4EmBHcplGmrduN9qTXop29StQZo2mjfc+WE6LHNlOJ1yKsjPZE2+L/X3vttcbL3cLSCDljztZ8ioCoK+rIS+7YySSWj9ciCI4pDVIb8+2nUDe1c8ZGsj9i+53c2q2TW+Y33Gq37LDP5fV3O6aG3Y71CbQKTJw30bxjc6+b7qR1Y7k0AjW/5Lm17vR33NzqkZ//y/Jy6KnGJdH5ndRXlneLnrxustj4tY1FLLfp1k3FwddPz9XabtReP9tObYvt53PAtmtfXt/q7PLtdst6bmkLROiZHzv99nau76FLyuP9tybXNkauHik/0EfA2jhWy+3H+2nzvZuLGPkd81Dm7YkfHFrfo722Pdfje+Z7Z37vZN2vr+UWXyD/d77ffyfj3/N0PK0q/53vVWL5
u174RsfzxIYyfN34tU2N89VZZ53V/DGi3Xyt/Wx7Lsd6Xl8/57xeffX60hXIL7WPS+/nUs4rv+8cevLJ4oU332w8YrRpHoZG3TGq9OYyRI3ws1O5thxdenEZ2EaYGvVEwJpKhLP91hPrRGAbbYh6ImCNEnXMtW+d2ur5egmM8rmk3Z5wLmmn4rmFFjjro7Is9EbHZXvT5YlykPBs0PUGdYnwsjUMHYW6WttQZTtb6/b30hQ4+dHJ4rW/+pOl2bkx61V8KY4PLsriCUTA9/LDexsNuLoM9TaXI8iUxRHwflgc905b/cryf9F86Ycn/6jTYp6fZ4EUnjpXzDN0n9XHD6QxrUmUuzdvblyOrsyvQLpRVbdAtZ8WVFFPFXX009ZxXGb9179evHfi9I+lTxzeOevH0HHs01Jqs3PJwu3NT3/i08U//Tv/ZOE2WIMt1WIE66D7cZBwNbY16HqDtrOqcDW2X2Vdrf2Zz7pbt+VvAgSqFfCFuVpPtY23gPfDeO8/rZ8fAe+L+XFV6/gIDBuspp5WUU8VdYyPvJYuJQHnkqW0N+vXF3Ow1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJnPVRWSqqSzUECBCoVOCvP/qb4v3//peV1qkyAuMqcPzY8eL49PFG81dNrCpWrV41rl3RbgKVChw+cLhZ37pL1lVat8oIjKtAfs5YvWpV8T+XD4UAgaL42WHnDMcBgRD4tbP+dvEbnzgbRoUCAtYKMVVFgAABAgQIECBAgAABAgQIECBAgEC9BEwRUK/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQ
oIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS+CT9equ3hIgME4Cf/3R3xT/6W8OjVOTtZXAvAkcfP1gcehHp98P67+4vtjwpQ3zti0VExgngd1bdzebu3X31nFqurYSmDeBw1OHiwM/PNCo3zlj3phVPIYCe353T3Hq/VONlj90ww3FimXLxrAXmkygWoFf+/Sni79zzjnVVlrD2gSsNdzpukxgXAT+pvjr4r/8f++OS3O1k8C8Crzxn35SvPzM3sY2Pvyf/p/if9z09+Z1eyonMC4CzzzzTLOp//v3/49xabZ2EphXgf/4X/5jkd4bzhnzSq3yMRP441f+uDgxfaLR6q//w39Y/A+rVo1ZDzSXwPwICFiHdzVFwPCGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQ/8dRIAAAIABJREFUIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7vhBu3306NHi/fffH3R16xEgUEOBUx+cLI5PH69hz3WZwGmBOP7jfaAQINBbwDmjt5El6ivg/VHffa/nBAiMvsAnR7+Jw7fwpptuKiIYjLJly5bixhtvHL7SGtZw5513Fk888USj52+88Ubx+c9/fskq/OQnPykefPDBOfXv0ksvLR544IE5rWNhAq0COzY+1PrUwH+vm1xXXH3v5oHXr2LFQz86WDz25ccaVW28dWNx/XduqKJadSxRgcMHDhcvf3tvZb277g9uKNZcMFFZfYNUtGvrruLA81ONVe/bt62I96VCoCqBOL5OVPQD1qicM3bdvKv8QeJUsf5L64u7XvhGVVTqqaHAUnt/HCnPkfGZKt4fay9ZW2zbv72Ge1WXqx
K47Qc/KI4dr2YAxLkTE8WOMmdRCNRdoBYBa4RlKWBdyqHgfB7MMWo1hauxnZ07dy7pgPXNN98s4riZS1mzZs1cFrcsgbYCRw4cafv8IE+uWr1qkNUqXWff9/Y169v//f3F1fdtLpavWF7pNlS2dAQiKKryPbDYo0ZPlqNWU7gaeyn+W8C6dI7XUehJBC4npk9U0pRROGcc/JODjfAoyqHXDzWuflg1sfjnskqAVbLgAkvt/TG1Z6r5/ohzpffHgh9SS2qDPz18uHjvRDXnjyUFozMEhhAwRUCGd9VVVxXnnHNO4/Hqq68OwTo+q8ao1NTn+O9O5eyzzy7ikcpnP/vZTosuiedNg7AkdqNOjIBAHiatXL1SuDoC+0QTFlZg2YplzQ2uFBQtLL6tjZ3AqjVnwtQ4Zyw7+8z7Z+w6o8EEKhbw/qgYVHW1EfjxwYPF+ttuazxi5K5CYL4EajGCtV+8CNXSSNe6BGxz6fNbb71VPPvss42g9fbbb++XdSyXS8dBNP7pp58uLrvssp79WLFiRc9lLECgl8APT/5R10VuX3dbc7RSXP6/uRwROsrlC+W0ACvLkbQxymLy+slRbqq2jYDA5HWXFvHoVGI00o6NO5ovP3F450iPbovR2g///JEiRh1F0Lrpa5s6dc3zBAYS2Hn4ya7rxbQzaVT4OJwzoo3pnLHhSxv8KNd173qxl4D3Ry8hr9dZ4NAf/mHX7l/5+79f/OzI6Svrrp2cLJ786le7Lj/KL3548mTx3sfTIRxbuXKUm6ptYy4gYB3zHbiQzY9L4O+///6F3OSibSsP2KPfE+W8MgoBAnMXiIBp8jrB6tzlrLFUBOKy61H/IWSpWOvH0hBwzlga+1Ev5kfA+2N+XNVKgACBKgRMEVCFojqWnEA+gtXcqktu9+oQAQIECBAgQIAAAQIECBAgQKAygVqPYI1Rir/4xS+amPmoxenp6Rk3Oep1c6y4IdLbb7/drC8uo4+7yl955ZVdd1YEeSnMi3VibtNoR1yKn9p2xRVXdKwnths3ZMrruOCCCxo3oOoUDOY3b8qDxNhu/lq0JZ93NdqTjKLuTvXnHW7n0qt9af3W/ZP2QTwfc+SGd2pPtDWc+mlT1x3y8YsC1n6ULDMuAsePHW9coh9leXmZ8sT5a8qbJJwspp4/UBz95dHG8xd9cUN5x+YNbbsUl2UfnipvplDWc6aOiWJteTf0XjdFabft1o3EneNTWXfJ6TusR/sOljc4mS7bd/LjG56sKdsdd5Xutc3W+v1NYPqX0+VxdLIBMXH+ROOy4zg249L9+P94X3S6AVu7YzGOwTUXTBTry/dNr5JvO27W03r85u+R/PXW9+jp9+5E1ykUerXF6wT6EYhj72j5nomSnzPi3+T073Wvc8b0L481zy9Rx6qJT/f173e+7dh+Oifk7e50zoj32sHXDzpn9LOTLTOwQLv3R1R26EeHij97/c8a9Y7a+yPaFJ/l8vdHnIti6qbWc9LAMFYk0Ebgg/LS/H3l/KfvlNlKXKYf5TNljvC5tWuLc+dwg+h29Xxq+fLi4nXriss3tP8s9k6Zs3xw6vRNE2P7qXxYPhc3+Erl3PJK1RVlXQqBKgRqHbBGYNhpbs0HHnig6Ruh3bvvvtvWO4K4m266aUYwmRZ84oknGoFfXFZ/4403tl0/gtS0rQhjY9m42VYe8MWKrUFt1P3ggw82A8Z2ld9xxx2N+vKQNJbr1OcILfObe0Wf83XzdsW8pJ36FNuIYDVcWvuRt7NT+9Iy+f5J+6Bbv6O+sBx2GoM8aG+1a+fsOQKjLhAh0ssP7200c/0X1zcuV37sy48VJ46duXPoWR+Vr7UErPu/t6/YW66X7ujcrp+XlJf/X/+d6zvOk5dve+0la4tt+7fPqCbuGv/tbE7NmIO203YPFFPFnnueK66+9+qyD9e0a47nCLQVeOzaR5tzF9/5wl3FsTKIiWM7LxGW5jdliy/Re7+9t9j//f1t64wn4yY88X7qNm9svu2bd28tLm2ZYzZ/j0x+pXw/fff6rtuNNt31x3cVE+et6dguLxAYRiBCmN1bdzeqiH+3tz51S7HjCw/NOGfEv92t54xDPzpYPHf3czOWy9sR/37HOSPeM51CnXzbKydWFu3m0Gw9Z3TabjpnbCznAr/+OzcMQ2JdAk2BcXp/PPXf/k3jfLfr5l1t35fx2dD7w8E9XwK79+0rvvvSS0UEmjPK1FTjz4vLkDXmdV296szNDdu1pWM95cJPldv4zXL9mzdtKraWj7xsefTR4r0TZ77rpNcibL3qoYeai76yfXvxuTKoVQhUIWCKgCEUIwC88MIL24arqdoUwEYY2qtEfe3C1db1Iri88847u4arsU6EkRGm5oFha13z8XeEnLHdbuFqal/49VoutTHq7dXvWGbnzp1DdStvT1UjYodqkJUJVCgQI3xaw9V21ccX4T2/u6druBrrHXh+qvHFO40ObFfXXJ57uQyPem335YdfLkOvfXOp1rIEmgL7ymOnNVxt5YlRpd/87W92DVdjnfiRIoKoqo7HeB/FTby6hbqxzR1f2NEcUd7adn8TqFIgrn5oDVfb1R//dvdzbqn6nLF7666e2433U5zTFAJVC8S5YpTfH/GDdZxT8h/UWw28P1pF/F2FwNd37Sq2P/fc7HA1qzxuoBVB57GPbz7Vbrv91BM3r4ptbSsfCoHFFqj1CNa45DwP02KUaLosP8LJ1lGj+c6K9fLwMi5Rf/zxxxuX+Meox6gngr5nnnmmsVoEfzFlQLepBlJbYv3Ydgr34pL6VKKeVGc8d/vttzdCx3QTpqgjwty0TGpHPqoz73OM+kyjVmOb0e9UBrmxU2w3D5Ojv7HtTi7RlgiV33rrreZ22/1H6le8FgYxHcBv/MZvFH/5l3/ZsI5tpn7F61u2bJk1crddve2eax3BGn1KUzbkUxJEn6JvQth2ip4bVYH0ITvuaL6hHLG3srxkOcqa8rLpVOKLch7wfKEc/bPp1k3Nu7XHF4oD5ajYFFId+9WxxqjTKkaVpjpjm3GJXbQvpghojDj89kvNLwkxii9GQsWl3gqBuQj85wOn74gb74G4Wciyj4+hZWefOZbyL8wxSnVjefzHpZTpeDsydbjYVQY76f205549jdF8nUbl9du+uMQ0Stpm3EU9SrwHIsSNcCpKjCrfXY5Iah0R3nhRIVChQB7MxGjWtR9P47Jsxa83tzJVHpf5jxbxb3O8Z9J5pfWcEXVWdc6IbUeJbV5ajgBP57Qj5fs8P2fEOa3TNCAVcqmqZgInps+MjpvL+yOO1ZhmKUq790d8Dqti1HX8IB2ln/dHnA9jCimFwP/f3v3FWlYVdgA+JH1oZ1LxYXjsgI+DWGKQ2GgZ/7w4Q2wsjomkHaA+1IFooDFi1A4YdQT8BzJt44wvAmMbSJzi1Eb0RTJjTEyEB6IyfRT7CA/QBHyk+3eGdWbfw9nnnHvvunfuvetbyQ33nr332mt9+2zO3N9de631Cnzz9OnRE90UhqXcfejQ6OYuB8lI0/Kof/ZJMJqvhKw/v//+Nz2m/
3hXR7+efzxwYHT7jTeO60nJY/7f6kbI/rILalMymjWP++dcKT+6997xf1N+0k1TkBA2JSNn/6UbOVvKW3btmnzvGwLrFWg6YA1eP0TsPw5++eWXz105vh+uJph88sknV1yLhG95jD51ltAyI0+HphooByese/rpp2eGdgn3EvSVkiBx+nH4HD993py/v99Qn9PWtYSqpT39EDSvzXNJ2ByPlBKQTvdl0tE3vkn7YhPbUtLe/Jx+l6kPytyt88Ls6br7P/cD6Ex10J+XtuyXNucr4eus6zCvftsIXGqBhDdHf3bPYBh09gcX/1E063H8hEj5RTWj7UoQm//WCFhjc/Spo5NfPPJz/hmVX9T37d83+qd9d435EjDlMb3px63HGxUCCwRmva/LIQlsSqg0dK/kF+P7fnX/+P1YptA4d+pslXsg50zd/T8e5B64/eTt4/kwyz2X0egKgc0QGLoPyrkTZJYy694a+sw48OmDVf5IdqSbemN6mo7MZ5zPjC92I9HLPZo/UCT4VQjUFKh1f4wuG42nh0nJ51CtPwgse3/kjxIC1prvjDbrymjUTAtQyvHbb58Ennktc50mAM3cqR/8/OfHI1wTsj7eTRsw/Yh/AtNSEtLe/bGV04Plsf73diHqTV/5yiRkTShbAtYSxI7POxWi9rdNTuIbAhUETBGwBsQEbv0QLiNXh0p/DtQcMyus6x87FK5mn3LOhIn5mhdIZmRrKQkc++0daut6X89I2H4bE/QOlczf2m9/QuBFUxmUEcKz6kyY2g9eswDWWksWOJtVivv0tgSsy0wBMX2cnwlcKoF54WoWlRp1/8rf04Wo+ZoXmmZUayn5BbYspLWefuWX8zKqY7qe/JKeESKlZFSrQmC1AhfmgByewzf3QHn/Z07WoVGpCUAz4qeUGoFnRtXm/hwamb0R99xq/ezfnsC8z4zMxdr/zNh/y4WRQ7OUEhiVMv7M+P2b58abddy81/KZMR2ulv1z7+ZJjVJeeM5nxjxL29YmsJr747oPXz94kjy5U4r7Y5DJhi0uUEaJppkf379/Rbjab3rmXf1kNxq1lKd+fWGBuP4+/YWpEsgOlc/2gtdMO5BRsgqBSyXQ/AjWtcD3R5EmKEzwNlTK4/7lkf2z3V9VhkZW5vV5dSVAXDQCtrRjup5F4eVQ+1fzen/u00w9sGiBqOyTY9K2ZUadzltUK+2MT5nioVagXILsjMYt/UlbEyb354NNyLpoCojVWNqXwEYJJKAcCoxyzoxeePj8cvMYl0cxS1sTsGbU0HrK0C/Kpc6sAJ9RFimvvTw1af56TuzYZgT6If2sTuexzGUfzcz7sZQXfrP+8Cb1zbs/c88lhC0j8mrcc7MMvEagCGShqUXvyWU/M/KHg9RXHqvOQnLrLfMCq9Td/5zKo9gKgZoCW/3+KFN6DPXZ/TEk4/W1CGT06lPdo/ilTI9Ina7zSPfIfx7xTynBaEa4zir/Nyc0zbQAWaiqlKE6ZtXrNQK1BYxgXYNofxRq5gJdVPojK2sFf4vOOb19owPW6VGyCRsXlQSWfZszZ84sOmTp7evpb0bWJsjOtA8ZUZxgtx8W5/u8lm39141iXfry2JHAmgXKfJlrrsCBBLa5wK63mitsm19Cza8osNv9UFFTVQQIEFi7QH/EaeY1vWbOILScJUHoX+zZMzlhpgrol/d2c6WWcvTUqfE8qrNK6hlPF/DG16x9vEZgswSMYF2l9HSQmBGYi4LBfqhaM2BN0Jtzp84ycnOV3am2+/T5+8HpvJNkvxJY17SZd85ltg1NB9A/Nm3PVAwlWE0/8v5YNHJ3mfPbh8BWETj/i+fHc52+2C3kUGOE3lbpl3YQWEYgI+zO/eAXo9930wZkSoosNtUvr728/hF4y7TDPgS2i8Ciz4z+okDbpU/aSaCWgPujlqR6tqLA77pMol/u/O53FzbzlW4O1lIS0PZD2a/ceut4AawyT+s/fPvb4wWuskjVweuvHy9olakGFAJbSUDAusqrMT0yctGcqqusfqndc86Eepfi3EMN7LusJmDs7zttO3SurfR6pjnoj1xNSLxsuLyV+qEtBKYFMg9lVkYvj+NPb/czgZ0ukFXOszJ6eRx/p/dX/wisR8Bnxnr0HLvTBTI9xclPnvBvqp1+ofVvIpBQNAtXrae8oxsB+/MHHhiHrGV0a/77RL7eqPvtXciaqQgyR6uwdT3ajq0lIGCtJblJ9WTuz5tuumlytgSUmR901ojLRXOW1mzyWgPWmm24FHXFP/Zl9G1G8gpYL8WVcM6aAs/+9zOjBz/+4KTKzPmYeSszP+v0HKsnj5yoeWp1EdgSAqc+99jop//200lbsujVu/7muvHiV7vfenF+sOe7Ed5ZmVwh0LJAwtVjB45N/hiRz4zMJ5z5H6c/M3Jv+aNFy++W9vru/mjvmrfa43nzpK7VJKHps8ePjx7v1rHJQli/7BaxSnhbyu+6Ua93njgxHtl696FDg4tqrfX8jiOwWgEB62rFpvbPXJ3TC0qts8rBwxPiZWGlUvJ4ehZXGhoxupkBa99gO45EHUS3gUBjAhll8djdj016nVXSD3/jlsFVzQWsjb1BGujuuS4w7YerR04eGVylfDR6XcDawHtCF+cL5A9yJTS97sPXjY587/bBz4zT9/1QwDqf09YdJuD+2GEXVHcGBd7SW6Dq5v37R8fvuGNw39VuuLlb3yVfKb98/vnRb7tcJAtqZXGslIxsTdCaKQcWLa612nPbn8BqBCxytRqtbt/pMHUz5w3NlADlfGnHd77zncFwdZXdWvfu0wHrsi79uVu3wsjPMsdu2r+WoHgo7F43sAoIbJLA+XPnRy/94aXx2fbs3TM6cnL4F+VNapLTENhUgXM/ODs53+GvH54Trm5qs5yMwJYUeLabo3vFZ8accHVLdkCjCGyggPtjA3FVveUE+gFrf8Gr2g3NYlZHbrxx9KN77x09041uzejVUr75wx/WPp36CKxKQMC6Kq4LO7///e+fHHW2G66+WeW5556bnKrfhs06/7zzJGDth4uLFv4qdfXnkb322mvnnWJTtj366KOjt73tbeOv/mjhoZOXQLZsF7AOSXl9uwjkUbZSru4e71QItCbQn3c402IoBAgMC/y+W/ytlEwLsPvyi1NoDB9lC4E2BPr3R6bLcH+0cd1b7WUWnSolAesrr659IdA/dCNSM1I1X/l+qGQKgUc/85nJ5kwfkNGtCoFLJSBgXYP8+94Ynp5DM4p0mZGOy47onNecZc5Tjq9xvnltmbWtPyVB5opdVB555JEVdlshNO6HvOnDIvN+PxOuboU+LHK3ncA8gddeXrlK+rx9M52AQmAnCyyaK/KFXri0kx30jcAyAovul3xmvPTChScklqnPPgRaEnB/tHS1d2ZfM7L0Lbt2TTp38qmnlurorCD2W91I1Cxula98P69c0w306pdME6AQuFQCAtYB+VdeeWVgy2iUlePLSMUEcItGOuYx+He+853j/RYFdoMn7Tb0R0cm2BsKUfP6Bz7wgXlVbci2zAlbSkamfvnLXx48T9rY
355wdnr6hcGDN3BDAtLSjlyr1fQhi40pBLa7wBVXXXzM5plusauhEDWvH/vQV7d7d7WfwJsE9ly5Z/Ja7oGhkqkE+nO1Du3ndQI7WaC/6Fv+4DDvM+Oh3uKJO9lE3wgUgf79kacjhu6P1155dTR9fwztS5fAVhb4ZPfofinfOn16vDDVvPLNbp8PfuELb9rvPV1YW8pPurlW541ind7WnzJg1rkzX6tCYKMEBKw92X7Al9GVJcCcDkUTdH7pS1+aHJl9E6D25xPNxhLQJezM9xntuszIzqGL/ZGPfGSyKfWl3v4j9iW0TFuGwtfpuvvzns4LbaePm/Vz/B566KHJpizA9YlPfOJNbUmb0/bSxhzX95xV92a+1m9LrllC1mnP6T5Mvyc2s73ORaCmwL6/3jepLqOREqJmpfRSXnzhxdHpr50effGvvjCZd6/m+dVF4FILvO/whUUU0o5fdAtenThyYsUvxee7++HYga+OTh45eamb6vwELrnADYdvmLShfGb05zHuf2ZMj/gWIF3yy6cBGyyQ+2PX5RdH9OXfVP37I8Fq/k111767RtP3xwY3TfUENkTgyIEDK+ZEve3BB0eZF3U6BM2j/3/bjU7NtgSeR0+dWtGeg+9616SePPafkayPz5iaMfVkWynv2bdvlGkDpks/dP3fl15aUdesEbTTx/uZwLICf7Lsji3sd9ttt40SlqYkLM08nAn/Eq69/vrrKwgyirU/wrGMUk3QltAyx0yHcgkc+4/Rr9Y0oytz3oR+Kal/aKRq2p32lXB4ui3l3Olz6kzJvulz+pDvn3766VU/8p66Mldsccx/81WC3LSjtCnnTDtznvx3q5Rco7SzjF7NdcvXUB/S7gTLW6kPW8VSO7afwL79V48OfOrAZGReFi/52oFjMzuSRbDyC3V5LNQvyzOZvLjNBD7Uvf/Pnjo7+QNCQtZ8zSq5B8oCPx59niXktZ0ukDklb/nGLaNTn7vwy3Huh/zxYdYfIErQVD4z/th9figEdrJA7o9D/3xoTfeHz5Sd/M7YuX27fPfu0ZP33DMOPctI0YxSzdfb35ijNa8nNC0l4WeO6ZfU80g3t2rqyb455s4TJ0ZHH3tsErxO15PpCY7fccdM3DJ9QTlvqSs7H7v11tHNvSkgZ1bgRQJLChjB2oNKgJmvfhkKJrNPQrfvf//7K4K1hIcZ3dg/LoFlArgaozSXqSd9mA4t+wtk9fs3a+RlPwBd8n20YreYTPc1AXS++nXPaudazrcRx+Tapg/9aRlm9aEExOsJzjei/eoksB6BW75x6+jQFw/NrWLfDftGR392zyiLNpTyP+fOzz3GRgLbQSC/EOe9nfB0Xjn89cOjzzxxcWGF7Pv8uYujvecdaxuBnSRw4FMHl/rMuO9X94/y2VHKMz8enoJjJ/noS9sCy9wfe/9y72j6/sjTEgqB7SiQEaQJTG/ev39F83/XLXyVr364mqkAsu+sUafv6AZg/fyBB0YZlVpKjh2qJ/vOqqcce6wbWNYvqavflu1orc1bT6CJEaxltGn4+wtUzbocCSYz4jKrySckLSNSZ+2b1xKsZe7NPF5/5syZcYDYPy6jHjNKtB/U9etKexLmpVzZW3lv6Hx5PfvnvA8//PCK0LKcq4TECQhLsDp0/lJfgsL0uQSgqWt6RGbfsT+1wKy2ljYmbO67lHoz3cF0mD1dT85fbKa3zfo5dZY29xermrXvMq8N9SHnyFfan3POs13mPPYhsBqBg5+6cfTqyxdW5bx6/8V/cMyr48J+FwLTK3rzS8475qPdiIsbbtk/+um/PjV+bK2c86puVfX9h/ePMtI1JUFsWSV391svPgZX6l507j/rjvloL8zdNaOOfjv79V3VrVittCWwpwv0V/N+KTr9++aqaxe/b67Ye8Xo4fPHu0c5z42e+fGvx6O0X+wW57my+yX4yu4eOPDpA5PVoPvtueyyN1+PRededI9M19ivrz9n8vR+fiYQgf3dlBf7brjw/+tlPzOXbo8dAAALwklEQVSu7O6R8r6e9f/1WbLLfmYc/PTB8T00VJY592r+H7Da+2uoXV7fmQI7/f5Y9Bnh/tiZ7+tavcqozoz8TLlmyZyinDtBZ0aTfvZjHxs90T3an0f5s/hUAs1r9u4dj0I9eP31k/qH2px6fnTvvePjM5/rb7uAdi31pP4Evjn397rFt37T1ZOyd8+eycjaoTZ4ncBqBC7rHn1f+ez7ao62LwECBDZQ4NXXXx2d+eN/beAZVE1g+whknrb/vO/0uMEJGPLYoUKAwGj097v/bsLw76/+BxICBDqBzPVZpmrwmeEtQeCiwF1X3zkqUzA8e/z4ijlDORFoVeBPu6ki//zd7261+9X6bYqAapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQIECBAgAABAgQIECBAgACBagIC1mqUKiJAgAABAgQIECBAgAABAgQIECBAoDUBAWtrV1x/CRAgQIAAAQIECBAgQIAAAQIECBCoJiBgrUapIgIECBAgQIAAAQIECBAgQIAAAQIEWhMQsLZ2xfWXAAECBAgQIECAAAECBAgQIECAAIFqAgLWapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQIECBAgAABAgQIECBAgACBagIC1mqUKiJAgAABAgQIECBAgAABAgQIECBAoDUBAWtrV1x/CRAgQIAAAQIECBAgQIAAAQIECBCoJiBgrUapIgIECBAgQIAAAQIECBAgQIAAAQIEWhMQsLZ2xfWXAAECBAgQIECAAAECBAgQIECAAIFqAgLWapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQ
IECBAgAABAgQIECBAgACBagIC1mqUKiJAgAABAgQIECBAgAABAgQIECBAoDUBAWtrV1x/CRAgQIAAAQIECBAgQIAAAQIECBCoJiBgrUapIgIECBAgQIAAAQIECBAgQIAAAQIEWhMQsLZ2xfWXAAECBAgQIECAAAECBAgQIECAAIFqAgLWapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQIECBAgAABAgQIECBAgACBagKXvd6VarWpiAABAgQIECBAgAABAgQIECBAgAABAg0JGMHa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEjg/wE2Mysh0hVdCAAAAABJRU5ErkJggg==) Uso de validação cruzada com 5 *folds*:
###Code
CV = 5
###Output
_____no_output_____
###Markdown
Geração dos modelos:
###Code
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
###Output
_____no_output_____
###Markdown
Gráfico BoxPlot ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABAAAAAKYCAYAAADt+IqXAAAABHNCSVQICAgIfAhkiAAAIABJREFUeF7svQu0JWV5rosJAzkcjofNYbDZpNN7pdP2IIQQTocgu0NwpTcSREREVEKQIPEWvG3jPe5sh8PhTjxuh8Nbsr0hXuIVFRERCWKLbUsQERERAduVtm0REQkSQEU479PWj2UxLzXnWt1r1pzPN8bbVavqvz7/XzX/76uas3fZRZOABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQksDMI/PrOqGQZ6tgvdd4b/WwZ6rZKCUhAAhKQgAQkIAEJSEACEpDAxBGYxgDAXqH8tGjX6DvRzyeOug2SgAQkIAEJSEACEpCABCQgAQlIYNEEjk8JX4veH80tujQLkIAEJCABCUhAAhKQgAQkIAEJSGDiCOyfFr0zuiv6UXRqtMfEtdIGSUACEpCABCQgAQlIQAISkIAEJLAoAqcl97ei+yp9JttViyrRzBKQgAQkIAEJSEACEpCABCQgAQlMFIHVac0HI374rwQAeBPgzGjPiWqpjZGABCQgAQlIQAISkIAEJCABCUhgLAK/llw4+t+LivNftl/PsTVjlWomCUhAAhKQgAQkIAEJSEACEpCABCaKwCFpzScifvG/GQDgjYBXRL4FMFFDZmMkIAEJSEACEpCABCQgAQlIQAKjEdg9yV8Q/ThqOv/l7x/k3IERbwpoEpCABCQgAQlIQAISkIAEJCABCXSQwLq0+XNRP+e/HH9L0hAs0CQgAQlIQAISkIAEJCABCUhAAhLoGIG90t5XRr1e/W8GBPgqwGEd65/NlYAEJCABCUhAAhKQgAQkIAEJzDwBXuc/Ovpa1HT2+/396aTdbebJCUACEpCABCQgAQlIQAISkIAEJNAhAvulra+L+jn7/Y4f36E+2lQJSEACEpCABCQgAQlIQAISkMBME9g1vceR/07Uz9Hvd/zLyfOQmaZn5yUgAQlIQAISkIAEJCABCUhAAh0hMJd2vivq5+QPOs5vAZwZ+T8CdGSwbaYEJCABCUhAAhKQgAQkIAEJzCYBfsn/5OiH0SBHf9C5ryfvytnEZ68lIAEJSEACEpCABCQgAQlIQALdILAmzfxENMjBH3buruR/VcRXCTQJSEACEpCABCQgAQlIQAISkIAEJozAHmnPU6J/j4Y5+cPO35Ay1k5Y/2yOBCQgAQlIQAISkIAEJCABCUhg5gnwnf2Doi9Gw5z7NucJIrwtIqigSUACEpCABCQgAQlIQAISkIAEpppAl16BJwCwW3RddP2AUeErAgdHd0abotsGpL0l5/as0g5I5ikJSEACEpCABCQgAQlIQAISkEC3CTyoQ80nAICzjgbZaTn57Oim6PnRoGDBPTl/e3T3oAI9JwEJSEACEpCABCQgAQlIQAIS6DqBLr0BcG9g46yjQcYTf9L+NOIJ/7ZBiT0nAQlIQAISkIAEJCABCUhAAhKYBQI8VdckIAEJSEACEpCABCQgAQlIQAISmHICBgCmfIDtngQkIAEJSEACEpCABCQgAQlIAAIGAJwHEpCABCQgAQlIQAISkIAEJCCBGSBgAGAGBtkuSkACEpCABCQgAQlIQAISkIAEDAA4ByQgAQlIQAISkIAEJCABCUhAAjNAwADADAyyXZSABCQgAQlIQAISkIAEJCABCRgAcA5IQAISkIAEJCABCUhAAhKQgARmgIABgBkYZLsoAQlIQAISkIAEJCABCUhAAhIwAOAckIAEJCABCUhAAhKQgAQkIAEJzAABAwAzMMh2UQISkIAEJCABCUhAAhKQgAQkYADAOSABCUhAAhKQgAQkIAEJSEACEpgBAgYAZmCQ7aIEJCABCUhAAhKQgAQkIAEJSMAAgHNAAhKQgAQkIAEJSEACEpCABCQwAwQMAMzAINtFCUhAAhKQgAQkIAEJSEACEpCAAQDngAQkIAEJSEACEpCABCQgAQlIYAYIGACYgUG2ixKQgAQkIAEJSEACEpCABCQggV1FIAEJTB2B3dOjfaOHRLtVvbsz21sr3TN1PbZDEpCABCQgAQlIQAISkMBQAgYAhiIygQQ6QYC3eeaiQ6IDot+O9o72iO6N7oi2Rd+Kro2uiggIcE6TgAQkIAEJSEACEpCABGaAgAGAGRhkuzj1BPZJD4+PHhGtjVZF/a7t23Pu+mhT9KloY0RwYJaMYImBj1kacfsqAQlIQAISkIAEJLCdQD8nQTwSkEA3CByUZj4zOjZa2aLJfC3g0OjgaF304ejd0U0t8nY9CV+HOCzaK9oQzVrgo+vjZ/slIAEJSEACEpCABBZJwADAIgGaXQLLSABn9m+jo6PyXf/SHL7zzyv/PPHnOuctAVTSsSUQsCL6jejVEemn0XjiT6CEtyT+NLomuiwyADCNoz27feKa3i/iOueavzu6JSK45xsvszsv7LkEJCABCUjgVwgYAHBCSKCbBPie/yuj9VG5jlnkb4nOj/4lujkiEIADzJP/uejh0XyEk4DhMJwe8cOAr4gIGEyTzaUzJ0SPjPh6BL+LsBDBRJPANBBYk05wH3hYxPW8Z8T85prmet4afSHaUO1nM9Cek7O/G1HGGyMCZm0CCPzwKNfaH0bU+84qbzbb31Ai+MZvkizGPpnMF0cE7+ajx0X86Cn3u7dH
4xhB0KdHsON++eKI4EkxeB4T0f5xjXvxy2qZuWdzP3pqywIZS9r2w+jGaFPEuGoSkIAEJCABCYTAM6LvRF+MeOqnSWDaCLAgfU/0k+i+SuzzOv9R0f5Rr+Aei2+chTMiFsw/i0p+FpZP6ZMvhztnBDhOiz4efTeq9xV2OCuaBLpMgKDe06JPR3zm1ed4ua7Z3hXx458fi06KcJgH2SdykjyUd1zUNli2OmnfX+XjmsNpLvaS7PygOke54+pVyVuCl3zWc9+iLK7pcY11wteqcmgjXOtG0JA3pMZtM/m+2SiTtzVOHqFM7u8/jmjfDdFnotOjtmOTpJoEJCABCUjgFwR6OQmykYAEJpsAC8ejIxaRGE+Hzo5eE22O+j2t4wkSPwDIkyPS8QbB4RH3ARa5PPni6dpC1FUjOHJk9KSI3zjgqV7h1NU+2W4JNAkQwHpudErEE2yuYe4D/O8eXOM8Iee3Lgj4HRitikjH3+y/Nbot6mWUxTUzjnNJ3qJ6fvbL8V51tj3Wr8zFrmXq7e7VltL+XufaHOvVvnqZjBdf1+hnjAeBD+5vbFdHKyPKOKtfJo9LQAISkIAEehHo9aHUK53HJCCBySDA4u8vIhz2Yhdmh9f3t9WODdolELCpyvP6bHEQMBwDXkl9WfV3lzYskPlhQ9p/VMRbEMOedHapf7ZVAoUAjj3O/1Oi8jT8vOzzyn1x/gkG8PnO02yecHNdzEd8dejZEUHCf4i4F+wMe3cqIbjYa83Btcr9i/sQgck3R5f1adTWHO8XuOiTZUkP89WGD0TvGrHU+lcKemW9KAd5u+GnvU7mGI4+zv+6iPHD+ed+zTzYEMFNk4AEJCABCbQi0OvDuFVGE0lAAstCAOeWhV+5dnly9NKorfNfGs1C89KIV3afFxFQYJF5esSbAcMWrEkyEUabWQzzHd6TIpyJxX7PeCI6ZiMk0IcA8/yUiLcAcOD/LnpfhHPcy4HkO+PXRFwjvDbPNUIQEafxo9HOMNqGehn3MxxrjP5cFxGgnESDL1+3WOr23ZoyGaNe41c4cK8jDcERvs5B8GcuYj78f5EmAQlIQAISaEWADxRNAhLoDoFHpKk8ASz2oexcO2bzWWyeHdWDB7wyv27M8nZ2Njjwg2Wfip4VrY50/nf2KFjfziTA0/zHRAS9eIr/v6K3RwtRP+eRYB73CH7Q770Rn/u8CfDYaC6aJKNPaJJtudpHvQRKCACcXQHifvfwSYZl2yQgAQlIYPIIGACYvDGxRRLoR4DFHgv3upP7T/l7MQtSnP/Lo/LEn3vCpAcAaCNPvT4b8fRzTcTrsZoEpp3AfDp4WMQ1cEn0keimaNg9gPO8CcCP5REM4A2iI6JJv9bTRK1BgK93fL46xjjy2w6aBCQgAQlIoDWB8hpx6wwmlIAElo0AC726o8sTvysX2Rocg69E/Pddu1dl/c4iy9xR2blfrY34RW6cl3HvX3PJe2JUXjveUe213OknwPfRL9hJ3eTtnD+IePUf43+4wKlva1zrvEJ+fnRgNBdRHu1fzu/Vp3ptRALl4Q1jSkBAk4AEJCABCbQmMO4CunUFJpSABJaMAD/oVb9mb8nf/V77HaXSm5O4vojk9wAmzWgTPxT2lKgEKsZtI8EDpElgsQR4mr6zAgCrU9eqqsEL2fJd+TtH7ADX+pcjvnPONcUbRXPRVSOWY/LlI8BnwKOq6rlvb16+plizBCQgAQl0kYABgC6Omm2eVQI87am/6rtUX+GhnHpZw14nXg7+OCz8/+R8B/qQiGDIuP3HaaK8SezncrC1zvEJ8Pr9zjLeANi/qown/wQAxzHaTP7DohUR5WrtCHDP+T+iUb9ydEe74gemYr3G17/42sapVUrK5TdQNAlIQAISkEBrAgYAWqMyoQSWnQCv6daf1PMr0CxER30K2OwITkXzzYJmmkn4+6I04pLotOivIp6G8kOAowYCPpo8L4zGdaCSVZPAdgI7M4jEXEcYT/LHdSq5j5Af4x5SyqwOuRlAgLePcMD/ekCa5inmyP+MBs0VxoDfMun1Rhf3N+7PBGv4EdgzItpBWn6/5ZxIk4AEJCABCbQmYACgNSoTSmDZCfDkjsU7C8myKDw8++ctomXcA/gecP21+q8torwdnZUAyFlVn5+S7RMjAgG8ETCKwbAeTBklr2klsBwECPbxBBjD+e/lLLZpFz/4WYIHlLfYr9S0qXNa0sBrfaVR+vT3STwoAHDcgDK51zP29fUanwNXRS+O/C2TUUbCtBKQgAQkMPaPaIlOAhLY+QR40n9NdGhUXkF9Uvb5DvK4zixPndZGxQlgkboxmnTj6f3/igh+/GV0dDQXFS6T3n7bJ4FRCeAIlrdduN4HOZSDyq4Hv0p5g9J77pcEYIfDvdRO924ps+7g18e61M79nyAwb29cGr052lJOupWABCQgAQm0JeAbAG1JmU4Ck0Hgn9OME6Li6B6bfV5JZUE4qlHGk6P6d4BvzN+L/Z8FRm3HuOlxgq6NXhZ9LCIYcmQ0F/lUMxC0qSLAE/8S6OOze1znnbw4nFi9zOrQ2Jtx2zN2hcuQESd8Q8TvkYxiw4I1CynsiojxLc4/bxvsH5X/5pTfLflQ9JZoIdIkIAEJSEACYxEwADAWNjNJYNkI4OjzFgBOO9cvi0R+Hf+ZEc5wW8NBJnhwYlS+A8wilcXluK8Wt617qdPxSjNvLcBlPnpcRFBkZeQ9LhC0qSCA88mr+wTuuGbHDXJxzyjXPE+yy9cBCiTuA6g4ouX4sG09fSljWJ6unede86XorCVu+CUpj3t4/d7LOPG2F0Fagr4EA46IvhhtiYYFFZJEk4AEJCABCTyQgIvjBzLxiAQmmQDf/XxndFC0omooi8KXR6+NeHpfnhJWpx+wYfHPK/N8fxQnuRh5ecLUVYPNudHl0TER/1UWv5FAsGQWnk52ddxsdzsC5fVv5jPXfnHi2+X+Zaq9q/wcoUyeLNcNJ7c4lzihXDttnM3mmwVt8jSq9s8aAQI+BHwXqmNPyJZ7/T4RgZsN1XE3EpCABCQggZEIuCgeCZeJJTARBM5PK86JWCBiLLx5QvSq6NSIV0Z7Bff4oTwc4mdFBAwOrqXblv1XRzgEXTf6cnb00og+XRQ1nZyu99H2zx6BhXQZYVzjBAF6XedVkp6b3XJ0VVQCfwvZ39pIiXNZnHeCBW2NNxMIGGC8VdC1N4na9nNnp+NpP8Hd8hWBA7LPW1+MoyYBCUhAAhIYmcCoi4eRKzCDBCSw5ARw/F8f7RvxVKg8eVuf/QOjjRG/5I8jzEKcQB9PC38r4pXStdXf2Wy3W6I3RhdEw94eqLJM/AYH5vpoIYIHbzzwRgB91yTQRQI4gtdER0U8Bf7j6LKI67ytETQgH8469wbuE82gH3/jvO8e/XbU9kEBbeKehBFwo3xtaQgw7tzzV0flqwDPy/7zIwMtS8PYUiQgAQnMDIFJDgDwhKIsJkYZEJwcnnK0XbSMUrZpJTApBBbSEJ74Eww4LSpznsUhQYHjo7IILwEAggDNa34hx/je/9sjypo2Y3HMVxsIBhA
IeGT042haAh3TNl72pz8BHOovRPx2xyER1/jnIwJ3vLY/zHD65yMChdi1EV+XaV7336yO8cYQr5xzbxnmZHJfmYu4/2ALEV/J0ZaOAOP88OgZEWPCfZ4fhT1v6aqwJAlIQAISmAUCTWdgkvrM4uYx0ahtZAGCo8PTC4MAkzSitmWpCbCAJwjwrxG/gL+mVgHzvyzGe9VbfjjvPTl5fjTtr8jjPBEAgBn3B59O9poVHpt0AjzxvyRaVenZ2XLtbooGOencD46Mnh5xX+A1/4ujq6Km8ao55/mtgYOjE6N3NxM1/ubeg3PKtUVw7evRzUPyeHo0AtyzeVOLcSQAxNcznhsxXqO8BTJaraaWgAQkIIGpIzCqc70zAfDqYXn6MGq9OP5Fo+Y1vQS6RGAhjX1TxCLwEdFREV8D6Hdt81SORf+nog3R1VGbp4dJNhWGszTtwY6pGCg70ZMAc/f9Edc41zqfka+M3hbxJLjXU3dezceJ/8tobYSDviH6cNTrWrgxxwko8BYegQN+S4PP03OiZuCMJ9G0hcDCfJWO19W5HzXT5pC2SAKMDb8H8I4I9odGZ0T/Myq/27DIKswuAQlIQALTTqCfkzAJ/eYD7jPRqE/xecX3pAinhoWOJoFpJ8Ci/6IIx/4jEU8HHxqtiHjtl+uAhT5vCrCAXKjEUz4XjYGgSaBDBAja4QRybfOjnusintbzxtwXo4WI1/o5vzp6WMSTfBx67gW8CfO6CEe9l/HZ+Y8RziXO/QERPxr66Ig834+4b/C0n0B9ScMT6VsiAhRXVmmy2SkGA+ptY/w3fh+KtrZJ3EjzkPz9Z9HvjZgXXvzvLRePmK9X8nNzkLFmncMYPzG6JCJoo0lAAhKQgARmkgDfj/tOxELooJkkYKdnnQBPhvaJWPATDJiLeO2XxeskB/3SPE0CEmhBgCfzOL08xec3Le6LfhbhnN8Q8T3+b0U/qI5z/t8ivvJD0ID8g4z7xLHRV6OfR6X8H2b/uxGfsZR9V3WO89+LeBuB+84oxj2Kz2vKoL5jWmY+M+noE/loI/1vI4KkBC2KsU74RkQ5P4q4T9aNwMZrqvOj1lXaA6en1QrlHn1KrUze4OBYW+MrAPCnPZTNuBIM0CQgAQlIQAJDCegMDEVkAgl0jgDfBeZJnCYBCUwnAZ7SXx7xK/CfiB4fzUf7VsrmfuPtnw0RwQKe/t8UDXs7jvM8reYp+Z9H/ODcXIQz3DTeQNoQ8QSe7XLce3hTsO3bgqOkbfaVv8fN37Z9vepsHuO3THiLg9+AIZgzHzFGZzUT+rcEJCABCUigSeBBzQNT8DdvALwsYuHy1Kjfa45T0FW7IAEJSEACM05gj/Sf1/H5GgCv6/PVn/8r4un4QnR9hFNOIGCc3/vA6S9vFK3OPn8/OKJ8Pmf5WhFBhXHL58k3baYftI+y7oiGGe2gz6M61nz16eaosMCBpn62BD7gVf9qFA9K6D8a1yivMKIM2sybBtSLwY7z9XqrU303jHnJT7sZ4+UIvvRtoCckIAEJSEACO4sAAQC/ArCzaFuPBCQgAQlMAgGcSpxYXgVHONSjvFY+rA84wpTZLH9UB3xYPZ6XgAQkIAEJSGAHEvArADsQrkVLQAISkIAEdhIBnh6P84S/bfN4yow0CUhAAhKQgAQ6TMDIfYcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQlIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQlIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQlIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQ
lIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCCRWLGrAAAgAElEQVQBCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0msOuEt32cAMU4eSYcg82TgAQkIAEJSEACEpCABCQgAQksjsAkBwDWpmuro1Ed+j9Inj2i3cbIuzia5paABCQgAQlIQAISkIAEJCABCUhgZAKvT46fRPeNqa8n3yEj12oGCUhAAhKQgAQkIAEJSEACEpDAFBKY5DcAcOAvjkZt44rkWRXdE907hWNmlyQgAQlIQAISkIAEJCABCUhAAiMTGNW5HrmCRWQ4P3kvGyP/E5PnjOjuyADAGADNIgEJSEACEpCABCQgAQlIQALTR2CSAwDbghuNauuSwaf/o1IzvQQkIAEJSEACEpCABCQgAQlMNYFRf2BvqmHYOQlIQAISkIAEJCABCUhAAhKQwLQSMAAwrSNrvyQgAQlIQAISkIAEJCABCUhAAjUCBgCcDhKQgAQkIAEJSEACEpCABCQggRkgYABgBgbZLkpAAhKQgAQkIAEJSEACEpCABAwAOAckIAEJSEACEpCABCQgAQlIQAIzQMAAwAwMsl2UgAQkIAEJSEACEpCABCQgAQkYAHAOSEACEpCABCQgAQlIQAISkIAEZoCAAYAZGGS7KAEJSEACEpCABCQgAQlIQAISMADgHJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEDADMwCDbRQlIQAISkIAEJCABCUhAAhKQgAEA54AEJCABCUhAAhKQgAQkIAEJSGAGCBgAmIFBtosSkIAEJCABCUhAAhKQgAQkIAEDAM4BCUhAAhKQgAQkIAEJSEACEpDADBAwADADg2wXJSABCUhAAhKQgAQkIAEJSEACBgCcAxKQgAQkIAEJSEACEpCABCQggRkgYABgBgbZLkpAAhKQgAQkIAEJSEACEpCABAwAOAckIAEJSEACEpCABCQgAQlIQAIzQMAAwAwMsl2UgAQkIAEJSEACEpCABCQgAQkYAHAOSEACEpCABCQgAQlIQAISkIAEZoCAAYAZGGS7KAEJSEACEpCABCQgAQlIQAISMADgHJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEDADMwCDbRQlIQAISkIAEJCABCUhAAhKQgAEA54AEJCABCUhAAhKQgAQkIAEJSGAGCBgAmIFBtosSkIAEJCABCUhAAhKQgAQkIAEDAM4BCUhAAhKQgAQkIAEJSEACEpDADBAwADADg2wXJSABCUhAAhKQgAQkIAEJSEACBgCcAxKQgAQkIAEJSEACEpCABCQggRkgYABgBgbZLkpAAhKQgAQkIAEJSEACEpCABAwAOAckIAEJSEACEpCABCQgAQlIQAIzQMAAwAwMsl2UgAQkIAEJSEACEpCABCQgAQkYAHAOSEACEpCABCQgAQlIQAISkIAEZoCAAYAZGGS7KAEJSEACEpCABCQgAQlIQAISMADgHJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEdp2BPtpFCUhAAhKQgAQWT4CHBvtG66qi7s72+ujGxRdtCRKQgAQkIAEJ7AwCBgB2BmXrkIAEJCABCXSfwJ7pwhnRs6N7o0ujl3a/W/ZAAhKQgAQkMDsE/ArA7Iy1PZWABCQgAQkshsA+yfzUaL/o9uj90dbFFGheCUhAAhKQgAR2LgEDADuXt7VJQAISkIAEukhg9zT6+GhldHP0juiC6J4udsY2S0ACEpCABGaVgAGAWR15+y0BCUwaAe7HfC3L+/KkjYztgQCv//P0nyf/50ZviHT+nRsSkIAEJCCBjhHwNwA6NmA2VwISmDoCOPw8XT04enL02ogfVtMkUCdQAkT1YzjgfBd/FGsGmdqWcWcqYX5S3+YR623W+dMRGky/i8jWbG+97OY50i83t9KG3bLT5EB70Sg8KE+TgAQkIAEJjE3AAMDY6MwoAQlIYFEEcEz2iFZFOFanRDgJ/7ioUs08rQQIEB1W6xwO+SXRthE6vFfSHhox5zAcz01Rm4ATjirz88BobXRTRP13RMOMdh8QseagznOiNvkod/+I/Pz+AP/rwIZoa1QCH+uzPxfRPvpyXVS3NfnjyNoBuJGOIEZb4+0HuFEWRh+ujK6u/u63ob97R6uj+ej3In4/geO3RQvRF6PLI75WAZNRAzrJ0soYO9pC+bdG8MI4xr2on5G+BCno945qX7/6Pf6L6+4h1Tgwf0cJGDHXuO4x3t4ZJW+Vbeo2zHeC7nz+wgOm5XrY2Z2lDYj2FJ+sXG+0Cy3XNcc9l7q579IOjDbSXu4npX3VqYncwJX5Tz+45zr/J3KYlqZRz0gx34n4UD1oaYq0FAlIQAJLSoAPUByiF0TfjH4e3Rf9KDpkSWuysGkh8JJqjjBP0M+iUyIWYm1tXRJ+OSpl/Fv2T2+Zed+k+5da3m9k/9iWeY9OuhuiMs+Z923azcLtVRHXBf39WIQzXTeO0Z8fR2c0zrHQ51jpL9ufRM+KWBi2tQOT8EtRnduLhmTGsYbPp6LS73o7yj7nvh29MiLIM0q7hjTh/tNwODKiD/8UrahlfE/2YdhLH87xd0WvieBIEGRYwKBWtLtLQACH6/CI8XljNOrnA3P3g5VGzbsEzZ/IIgjonVox5b66chlayf2PgOKZ0Sci/Bbuceh70Wci2nZYRPBnUJAup8c22sH9ACZ14zhzjh96PbF2Yi773Ks+EsFw0o05D99PRty/dhTHSecwE+0zADATw2wnJdBJAizucWCeEhGk5MO+7hQYAOjksO6URjcDAMwb3hbZv2XtLOhw4nCA647s6S3ys
2iaj3BWEWXgcL8+auPIJ9ku/z0i4EDdP4xwSAcZ18ppEQEy8nwtmo/KU7KSd9QAAGXh1K4qBQzZ0j9+/LDJbVAAgDFhkVz6Wxb1X88xnHACKV+JvhVxzReun8/+CVFzMZ5DizLeOsDx/9eIoESdIWMBE9pQHJCybQYucFJeHeFUNschh7QdQID5x5xgjJg/R49YxxFJf1elUfOOWFVnkhPE+rsIptw/mM8707i+T4q4p9EGrjeuQ64v9P2IMePcDREBU4IUS+280o6jIhzk+ah+TXP/pX7uYfV7HQ71ZyPaDMNJN/r0nAiuBJMJZGsh4A3caSABCUhgxxNgEccinCewfx7NR0u9yN/xvbCGSSHAK6IsBlncs6DZ1qJhzL//ErV12OtFslb4s4g6b4uui3gquTZaE10TDbM3JMEfRjigPNln8UiZW3pkpD6e1vxFRPk3R2+Orojo+7hWXqWl7Lloc4uCePr2X6O23BiPl0anR1zjvG5/efTxaFNEf3kNFScEx+NPIxbh9JPxJA/9vyC6M1qs0W6e4HHvuTC6OurFEMa0r260A0cABswfAhv/LVoREQi4NupVVg5rS0SAOXt7xfrGbO9YonItZnkI7JFqeXOLt2q41rdGXJMEBRcirieutT+IuE/h+L88+s2IgOtCVO5j2V2U4cxzH6aetl89vDtpF6J9IgIVk27wvCj6k+ikCM7cB7kHz7QZAJjp4bfzEpDADiaAw4RDcFj02Oi4iA9OTQKLIXBTMuOUHRDhOOKIDVvQ4LzhBOJUsigif1tjzvIUnHwsVnlqxpxeFa2PqH/YohQn5hUR7aXda6PnR38bca5uLHr5XYwjItr7oeiCaLHOD04u6x7agHB4WdAOMvqOg0462rL3gMQ4y6dHJ0cs7rdFZ0f8sCeBgLpRFov/SyO+JvC8aD5iUf5X1TkCB8O4JslAW52zj4oIBPCkj7nTNOq4Knpc80T+hhdzZz4ieMkcYiENS5wY+qjtOAKMzTXRcyPmP0EArbsEjknTeRLN/YFxxfH+QERgtW6c577DvYD74BnRD6L/Hd1ST7iIfe5lBCRGMe5Zb4zIuzBKxmVMe33q5q0FPrNYh3GvIxA708biVJOABCQggaUnwFNOPsBfHBG5Pz3S+V96zrNYIo4hCxicM55sMNcGGY7pgRFOL/lw4kexo5OYp1I4rZ+OqB8HcN/oj6ptNkPtyqTAacSRoU04ygQW6msRFpY4mBynfzjp74mWYsHGgpvFIOU+LKJPgwynGW4ELFh00+9BRlADJ5nrnKAG1/0ro6bzXy+DwMIlEU4BfSXIcmT0mGix9wvaz9jRLuogUEP5oxjpYf/eiL7QRsaLMWJBTR3DbNy15rj5hrVnR50fp73D8hAA4Fq7OLosGub8DStvR/W9bbnL1b7F1rvY/PBZEbEe4B63JXpZhEPfdP5Jyz3yvIj7AuPO/ZI3og6NuH8tl9Eu7uPMxzbBqKXgNqivbcrnGir3PwIwBFnb3LcG1dv5c8s5iToPzw5MDYE2N5Cp6eyUdYQb+6QZH9R8SD8iOiE6eNIaaHs6T4Dvj/P5jWM6H+Eo4iT0M84/vDqJA7wt4qlSG6MeFp4YTi0LP+raGOEE4iAfHp0btTEcSYIGZ0QshHmyeV10RcS1sz6iPtrM8XdEV0dLYZS3R4TjSptZkC8MKJi0vKLPfWZzxJMjFpC9jAXl46O56uRF2Z4V4eAPM8onuMD39FdV4t7BmwE4fOPe51Ym7x9HPE38fMS4j2u0AUfk/dHqiHYypzh2U6NQPlOZm4hxZA7hOGyNCEI00+fQdoPhmojyycffP42Yb8xb1I/FQ3KO+mgX+7SBwAvjRp0Er5pGmv2jg6L9ot0jAh6lvoXsN8ePYBvpmRtcAxgBFtrMMdLTP+pciHoZddFW5h9jQ5/gsyW6Jqo7hLQRFjgtpIFBMwhAvXBDXFP0gbHGUSP/ICvl0yfaQ1m0hzpKH5oMGM/1EWPDtQnvdRF9gR3HFiLKIS3l0rZ9I9JQJ2VSx3XRjVF9XBmHAyPKhQk84Qsz+lfaR17Gt9m+HNpupGVsqJ8yCz/KG2a95iJzqMxh+I5qJyYDfaD/b4vOH1IA/dwUlfsCDFhXcB8qfTg0+/QTlox30+C/MiIvaWCNMR7MKcYDgxP1lfGrDj9gw5iQl+uAsWE+Nq1+LZIOuz1irGh7c7yYc6sirgvax3n6xbXJmDGfqIf5RtqDo7mIeugfaRgP0tCmXsY5yjk8emR0RTTOGPYqu5PHAKdJYJYJcJM/JuKmrHWLAB8GfDhePiHN5kOdD6ajokdHLIhGvcdSBoskPvi0ySfAHGRRtbPt1lTIQoo5xsKOxTILJ9rTNOYU8+mIiMUd/wvAg5uJBvzNvZFF0z1RWUQxrz8THR/NRTj0F0csxIYZbeT741wrLPJo+zMjvjfPQv0vq2Nwxdmk3OaCMYfGMrjdEFE23OjblVEv55AK9om4nnHILov4oa5+xiKbRTSLUtr7zoh8bQ2+GyICDnMR7WOBTvvacE2yBxh84czCm7FbLMfSRt5OYMHO/ON+VXeoWKCfEPF1grKIZw7CmDZcEn04ol+UVwxux0b1fHw+M1+2Rtzn+erJudWxbLYbZcOKOmF3UIRDxHHGmUU/v7/wgervbLYbc5ix5ZVg+jEX0XbqwzHAQaA+5l/9GqffL4zmIr6mMh/RZjjvGcF4S0Q/eXOFz6hitIl0T4yom3bTb+zWCD4bIubOjduP/qIf9Ik3ZxhHtvX24DQVbsw/5iyOHGkvjL4W9TP4lvbMZ39NRB8YF8a0ML8g+/W5TL7yJg9PqPm8OynaK1qIeEX8TRF9m4+YL8wFrhGOwQFOhTMO7nlRMcrhPkC/+doK48/vhRwW0T+stO9d2Yd18xrhvsL4wJl+7R7hhF4Z8RZTKSe7DzDm9HHRoyKY7h/RZ+pgXC6NPhJtjGDdxsgPJ+YYY312m0xJw1jQP9oCP/rDvCrXHF8RoI3cn9hvGv1eHz072hTxVhL29Iiy6CsGb+YlfWN+9+sX863UyTxlztaNOc21iJPN+BXOzJ+rIsbzQxFjX4zrlfGlPXClz7RvVUS+s6LXRbTp1AgWpWzmEuO6OaJ/3FvoQ7P9XNdfio6Pjo5oOwyb6XJoNmzX2eimvZRAXwLcHJ8U8eGldYsAN30WHyxSltv40ONDhQ89Pmz5kB/HmI98uLJA0CafAIsYvsO+HMYTXRZaLLB4KsQisde8YU6xyJ+LuFZYKJa3AbI71FiYUQaLX/4rpeJEsphjMcxCmwXoAdEVURsj399FfP+VBSjXzpaoOAzUcXF0TlR3dvLnooxftccBoH4WsgQuLoqou2ks2OnXXMQil4AHHPsZ5ygTo/xrIxayo9jWJP5qdEyEE8SPJp4XNZ2bNmXiyP1+hNMAR8peCqMcFtuMEWPO4r0Yi/HTIxwI6r062hDh/MPm0OgZEfdLgkDMxWLMAYJAnGOMmM/kY07goD8hOiTCIWDMis1lB+fm5Igxo07Gi4U9TgJ52bLefXtUWB6XfV7BZoxviphvzDXqYyxP
rLaMAw7LrRHGvX1NRN+fGXENch+4IGK8aT9lrojIC68yv+ayj2NzSkR59JG6aRvODm2FEdcEnwOUB9PSJtIxrsUon7ULvx9Bfcw5+vHTiDaeEV0TwYVjdaMseMLgqAiul0b0ZfcIBjCCHX/DoLCjTZxnfHhb59iItt4cURfnKX99RPmkvTFi3KiHc8yPwyLGlXsI7WReYeU87WMf5syzKyPy7x+VMWKfNnOOMcfg9/KIMWSebowYg8KR+ULbexn1PC1inJiztIv8pKeuw6vztJk5DO82Rl7GmP5cEcGqrdH26yL4M66URbtgTpmwYB71MsYCHqQpY3t7tU8e+sv4co75/72ocMzuA4yxoE7mxX9qnOU4bE+JKPPyiHGnvNXRERFj+hsR7KgPIy3zgTZizHHYw4g5XuyM7Dw/Iv2GaGtE2WVcOE85t0ZXR03jGH1eF1EXDPvNg2beqfubiahJQAISkMDiCLD4+vOIxQEfuOMai6cTxs1svp1O4NrUuFwBgKtS90LEQnR9xEKJhU/TOE6AgMXi9RFtbhsAYMFc5iML7/NqhbP42hBRP2LuXxkNWjyW7KS5KHp79KKIBSqLbo6XhSNPaFj0LrXRfzjQXhakLDS39KiEa5Enl3BjEctisSxQeyTfvsAti9VxF5alLhaplMVinzEYx7gnkZ9+0P6y2B6nrHoe2oiTgBPBuO0TsZbkODxZoLMIPz8iwAPvuyPSMk+fGh0VbYuYQ4gyYH1AdEFEPhiSj/4fFr02wsEg/8URcwVH4Zjo5Ah7X8TT5M3V3/T/udHREY7JhggnAIcBB5HxZI5R9mUR1w/1HRrh2NJeHEHmx4VRc25TL9cEdTKnYMA8IC9OEO2GCe3C+Ju2cC3RR/IyLnxmwIyAAmUinqL3cmJy+H6jnTzAIC/te0fEfYF2zEV8Jp0W9fpM2i/HeeoLP+bH26JLIpwu5gwOXhkrAgyM4xURZRfjWoUR7HjyemfEXCDdXMRXYtZGG6O3RDiEzBvSrIh4c+JZEXUdG70pqtse+YMxYi68IuL+UuYdgUk4wQDG8Occdnp0YnR39IaINzmYZ8yXwyP6fWTUy46vzjNHzol42l7mIszmI+YUdd4RMdfKfMtuX2NewAtjXJtzqW/GKu23s2XeMNaIsqh/HOPa4/76/YjrlTlPPzdF1PHTiDkwitGeMneZb/8Q8eYNbOjrXMR8ODM6LaI/pKkbdTLezGECxFx3MN8Q7R0x5mwZU978oB+UvW90XMTvKzC+zCUYN20hB26qDv6XbM+LmLMzaVyEmgQkIAEJLI4AiyMi2o+OWHjwITWOsWA5K2KBrU02ARYetyxjE29P3f8SrYvmokMiFrks3oqxEGMhy2KVhf0Xo1EWjfNJPxex6GeRz2KuGOV9ITo1YgH2sIgFFYu2NsbCi0Uo7WbBxkIPI39ZjMJ4qQ1uX4lwfOYinA8W+M2F4D45hnMDr89V22z62n/MGRwWjHGoO0p9M/U4AVcW4RhMitPQI+nAQzhYKyPuKd+N6PdSGWVRLobzwMKdseKpNXUyhjjYLOSL0S+4kJag2XzEWxUcY47SXs59NsIRKfMUp5w0vxcxj9ln7co8py5eB+Z+y9zj9ebromI4CKRljFdHXCvcq0+KmHf0A+f/vVG5bnAQaD/zgTmAQ/KnEX2hvLqRjvv+1VGZqxxj7I+OaNfv1jLQXvrJtbQhqrcVPrzmzHnY4qwOMs7/cUTfKAcH/sKo9ANO6MAIbnUrTE7MQebaB6Ozq/2SjrzwmYtgwGcb9ZS5Sbpfi2jrK6ONEQwoG+PaOSBi/HDAz43q1xgsYfWEaC76f6OmURb3WO4T74vKnNucfeYHY3hk9PsRPGgvzHHwadslEUzr9+kyhozFqqhu++UPvhLCGFwZMTfYMp4Yc4O6KRsH9bCIe9ebODnEuEcWNlyPoxocyzVBWbuNWkAtPRwXIsa4MGUsuDZKX0ctnnnIdcI1c1ZEgIsymRNYuf5hfnz0xOijUXHISQPXMh8pg/Gkn7SJ+zVjSxrma/2aY3xh81vRXEQ+0pW6s7vdmH+MP1vmDnOGds2klck4k5230xIIAW5+74pYyGrdIsBC5/IJaTIfIudFfCj9c8SHGx9Ye4zYPuYjT5MoR5t8As0Fxs5u8YZUyBNAFl2PjC6KigNAW3AeD41Y2F4RsUgfxXjiUhZg789+vb/Uc12EY3RktLYSi742Rlmk5anlUVFZ0FLmxVFZ7LYpa5Q01Mt9g7r3jx4eXRjVnRMWjzhNcLsxguswe0gSlD78KPvjLqTpd2lLvcxh9TfPMydwFFhQo6Wcq/StlFf6vDLHDotYV3IvZF40jXZsihjjgyOchvOjenl/kr9xkOuOF+d5gswTxXpfGD/mNw4AQRrGqmnUhyNHO9mn3TyJZEs7zonq10z+3L4uuDTiesFRoI65aFtUt0vyB3XW2dJWHCvmF873XrUMpZ9zOXZ4tDmqO6c4YLwyT/3DriN4w4/PGNoJr2Y/FnIMZ4m66sa84gkocwTWF0RwrRt94jqh3DURn2c4dfV09AeGhWt275/312SfIBD3IMqoX1+kw3D+bo3oC23qZQs5CGeY1O36/FGcN/pR/BnuQysi2sZnaZ0v+WkHfabNqzhQM+YkxygLbowH5dQN5xJevF1B+j+Kzop69a+ej3HivoJxjY9q9L+05cHZL2WNWs6OSs+1Dzv4cJ1yDdSvC+plvBiTEyPGnOuK679uXE98VlEOVuZ0fW5z/XJ91K9H5hLX+Z4R9TTrpizsOxFjRVuZmzNr5YKZWQB2fOYJcFO5ONow8yS6B4AbfPPDebl7sZAG8MGHA8/3pXHOjohGudfy4TTOAiHZtBkjgJO1OVodHR2xiK7PHRZDPCHlPsdilsVV20UPZa6LmLsstliEN20hBy6NjoxYUPGd9Q1R05nIoZ7GohhHpL6YncvfLOJZ4PdbxPUsbISDhQX10HYciPpikj6zyIQbjgxOzjBu9KH0g3zjtr2er84lRY5kzAWcTxbGS30/2S1llntaWZgfkGPFifth9ptOZ2n8iqo98PzNaO+I+cV4k+eoaP+Ieyg/2oUzwD73VVSMNpCOsWOMmNu9Pg+4n76vOsc+1wSOOWnrjkb+/BVjDn8zYjsXUU/TvpEDverkGA4M40c7izGP6CeOO6+RPyLCOS79xHHB6W5j+yYRLJkvN0TFGW7m5fqkPfW5xHWH84qRn3nSb7zIixhf8tWN41xLvRhwPaF6vcwZONLuVRFvdbBPmjKfsvsrBv/6uJeTzDvGk7rJW+o5pNqnXxtL4saWdjF2TVudA7CgbMaB8nvZrdV57h/7RYzFQq+EtWP1e0K/vg4qot5H+jzu/WVQHYs599vJzNhyv2F7aJ/C6AdtL9fh+Y10jA1lNK3cA7jmT4sYZ47xVhvX0OboxmamHn8zdgRTaCP3K+bNpLHs0eylPzTOJFz6VliiBJaXADcDTQJLRYAPZxZ6WyIWmOujJ0dlwbVU9ViOBFigfj5i8c5C+rDovIgFDQsbFktHRLdEpONeN8yRTZLtdnyEc4Z
RJgvxpnGM1+mZ6ysj2rEmauPE4BgdE50elUUhbZ6LeLqGo8T1syPsjhTK1ye4NuHGYpL6ymcBC/r5CL680cPifRg30hRnhLT0ZRwjb1mb0Z5xF6eUgyiDti2lsXAuPJgDlA/H0m7udwRQehnjDl+McnAEbop4E4Q5xJgwj7lfMj+Yu1sjvhrAPCyLfOpnfsKZ8WRh389ur50gD3nhyveQ+xnnKZP+7Rc1nV/ycb7X+HCMudC0TTnAG4c4/6si+rsuoo848Dg0n44uispczG5Pgx3OKunoX78xXsg52lOfj4wBfcIIRvCafD/DUaLvjC311cuh3O/3y1gdn8v2qIjgIP2l3ZTHuFMewnpdL5RP//qxKIzreQkq8Td5YNrLOMfYNceIuUHb4Mm49xpbyoN1eY2/zMMFTgwwxrjU958GpOt3qrSN8z+I+o13v/w78jgMGEe2XNvlLZZedTKPGB/mIF+bahrsuZ6bBr9XRa+J+IxZHxFkOD7i3PUR92quHe4n/Yx7OuNAG5iDbPuNc78ypuK4AYCpGEY7IQEJTCABPmiuiRaijRHfLTw1YhGkSWCpCFycgp4asUB8THRBxOKQBdaREQuzzdElUVtjIcd8LU4PiywWW72M8nESsLXRIdGVUXNxXSXZvmHRxSKOV4tKicgAACAASURBVITJy0KdfnCMemg3QYAt1blsltxwxrZF+0c8ib0wKo4GjieOCvXDs43RB6552LHAH3d9BU8Wplgps/pzpA2MEeOwlAtc+vUbEXxwkopjQ7+LMfdKH2qH798lX1E5iPOLY4yzyFsrR0QrKh2ULUEBvlr15ujdEX0rddLHQfMtp+832lasn2NZzpdyC8ta1u27o3Klz2dH9JU+Ms9WRVy7zH2unWOjy6OnR6TvZ4wDGtZ35mSznfSncGB/0FjBaGufRlBuP4aUT1+eHR0YMV/KeDGvr40ui06JVka9jPKbbe+Vrn6MOsq87zcnKJN7ZNOJLq/W92JWr6Pk51i/udFsJw5qYXVw82SLv2HEPKFPC7WyWmTd4UnKXCwV7ZGdQfc/2g/jXo5+v/kM84sjvpZ2XMT1wz2BecU1BNP5iB+WJKC1IepllF/m1KA29so7VcdmuvNTNZJ2RgISmEQCfNAQ0cYhujH6VMTTsRMiPsw1CSyWAM4Ec4sFIgtuFl8sbNkSEGChdVXUbxGfUw+wdTnCoorFLcb+XLXfa1PSsRjjO7Es1Db3Slgdo23lSQ5txRnnqdFR0epor+ik6OvRu6PmQj2HFm04IHBj4Ui91IljgrHIZKF4ebSlOjZsw9NkrnWu6wMi+njrsEw9zjOO+1bHF7LttUjuke0Bh8pCGkdsKdd6cykPUe4VUXFS647gX+U4fIcZfSuMGGPG46bovIjA0NqI32g4Otq/OsavllN23aGif8Whze5A43ooNsjxJQ3OJKJtSzUH6e+GiM8Eful/TcTXYOajw6O5aL9oIXpx1M9oD8zp96C+c65cn6Ws4sAyRy6KBtVTr597SHGe6sd77a/LwRdGOGkEiT4afS66LuI6YxwYf4IgzPmlMsqkjYwb84I+9jLONa+Lu6r0XLvNc/Uy4Pl/Vgcov18QpJ5nIX8g+npkxL1yUICnnpf58DtVHuY910izX80xLvk53u9cvY7F7Jd7DWUwr/lRyEH3f9IxRtwvRzE483lH2XwucE/gmvmT6IhoRcS9k2uDNL3u3fXroc24pZjptEETfDp7bK8kIAEJ7HwCfNjxYX9ZxOL1wxFPRtZHLFQ0CYxLgMUOrz6y0MZpYnF5fsTiiEU4DgfnmwvGHOprPP3HIcYIHvRaSNUzk/aAiMUXCzH2By0AcQyY+yxMSceCkToujPhe8LMi2v8XEQveS6OlNhZ/X4jgxQL70Gghog9w4/wnIq7dNsbCFMdmLmIsKGdb1DY/deB4/G5EezAW08VBrg613tB+nCzKXMp7DGx4Io9tinBGMJzDMseo88bqeHPDmGO9uLAmpc04cbBj7M+NcJaeGb0oWhkdFcEbxxJj/sG7n52WE3xHme99c21QB2U+tF+G6jjXE6KPtGkpjP7Td8YVMe83RgQD4MpXIVZFJ0cExfoZnyfkPzCijTCnX02by4HCvJzjngFfeOMQlbY08/J3aW+vc/2OwZbriusAdlzf50S0j7rrY0/aZvtyaGz7VlU+Za6OrutREqwI1DUDJ1y/tBFHkvPw6XXf5HpijDDSl3lYHeq5oZyPRdxnKPsZ0d/3TPnAg/M5RDCMPl0cMXbFSvuafSnnaet/qKXfEbvca5iPbOkb86nf9U/948yp0m7y3l6Jew73gfdGjAefKydFXBNw3hI1jaAfrJiDlNNrfJt5pvLvpbzophKQnZKABCSwhAT4sOHDkQ9xfiCQtwGuiGb2Q2gJ2c5yURem8zgofKbzCiQL12MjFn/Mt0uitobzyeKdRTKLJJxwXscfJObx+VUFc9k+LCoBhOrw/Zv12cORK47Hq7NPkIG6WNi+P7o0oi+HR9S7MtoRtiGFsuinLv4XBfp8QkTb4Fn6lN2hxkIUZ4NFMItM3iJgO4rhMCHGj3HjdwpYWI9jxaljHOjXUhgLa4JDcxHc+F5+cX4Yw+KAMgf7GY4MvxLO2x3PiWB0XPTJ6MsR/S9WHAsW8vzCNwt22PDd4eLE4mASLFpTMjW2pGcsXhDRduzyiOPM835jRJkEY2C3OSr9zO5YtiK5XhF9s9ruU5XCvZ+5RvnM+/MijuFIDRo3nB/ahf1+1O8amc85+lo36vtqdYB2Hdo4X/8T7gROeHsNJm2Mtv9WRL3MC3gzHxlPrvNizCcCANhS+SObUhb8qPuYquzmpoxt8zjXL+2kLTxV7jc3uKaYO8xBxoFroY29N4mYyzAob0cMy3dQEjB/V0fUwzgw54sVpmU+1U5t36Wtq5oHq7/rY9EnSevDNyQlc5h2ol4BCcbkqOg7EUHp46O2xr3iSxH3HPqE0f4SgLki+wTPOMZ1U9JsT1izfbPP5yIsy/2qkWQ2/lyqC242aNlLCXSLADfbIq/1yRq7Egj4UJqF48FrrTdGBgIma5y60ppr0lAWr8wfFlUsXB8fscDZGLVdoCbpdmeMRRJGXuYlDsMg4YjgrLIA5F6zPmIR2DQW3m+LWICxcP1o9O6oLETZspAjzdaI+9cpEX1iUbfUdm0KpH8s5HEW4PbEiHZcHOEMtDXY82YPC3zszAhnlj60sf2SiDErztgF2WdMx12kM+Y4CiyEGc9h7WDcyudFcwv7IyOcQeYHbTo7YqxK+6gLZrBkvI6Omp87tIW5AWvaBDPmFVvmxsHVeRzIulEO6RkfyseBwGB9aUS5OGywq9fJ/rERwQHm3Kcj8r8zot3U+dKoyQbnhfaTl3SXRcyTxRj9pP2rIso+MGry4e+V1XHGj4BHP6PvJUAET8aHPhajLJjxplmzf5S7IWJ+0w4cTBzIJrv1OYYOiO6MGKc2BrOSljb1cgY5/uKoOGq90rSpq5nmqhy4MqIv9L3wLOmoZ10Es6Yxn7nmmCPPiGDTZMfc5PcZKHdbhENaro
HsDrTbcpb5xlygnI9EtIM6muw5xnz+u6jMw7dm//KoXh/zgL/XRIxTvZw98jf3IMawl9HPMk7MzWY7euXpd2xjTlwfwRfutKfeFvJRB2+1cN0h2t7W6ONcxNgdHjXHhfnEPZQ64cv108v+cw7SDtrKnJ5Zaw7OJIFgEjFIo4pJMMn9miTGtmV6CaxK1/4m4ukaOjViMaBNFgE+1G6J3hCxgGWBzSJBk8CoBHhdHaeaBfUZ0dqIhT5PVtsai6pHRcUB42k8i8Q2xsIbYSw6D4rqi3oWo2+OWDgz76+OcACai2cWpBdG747oD2sAFpRHREv92U7dn4lwhHCATotYdHMcZ35UuyAZLolYgNLf90QsWFmX9DP6xGKYPlI/Y7Al+mC0EI1rlLE5onyextK/fkYanAd+PKvo9dl/S4ST8uWIJ3Y4K7A5J8KJbi6ycVa2RvSB9j8t4nOHObkqek6EA4TB6aJqf1O2zB3mGnMC52suIh9iTGCJ3RSdV+0vZMscxRFbH/Gq+ZER85f+nhDxy+Eroysi5hV1fDS6NGJcnhXRbtpHXaQ9M3p5hENBOp663hYtxsjPV05wPLg+cIRoM+2kXubA30SFMU8zBxnjcHFEn5hrtBdutL8wY+zmIsa3buS9LnprdfAJ2fIVBDjDDh0XsXbgOuYz6h+jtgEx5gVvOsCa65ag2uroIRFtI9BD/xifYuWeUzs09i5zDOduLoIBbSiMz8g+rPi7adxvaBdsaM8/RbApY4RTy1xhHlM+Ti9zaRQ7P4kZ+9ujFRFfC+BaoZ5DIngfGzEenGMcuCeeHXHNMRZ147qk3VxztHddRNtpK9cSY8g5ymgabWCMML5ic3p0dFS/b//i7PB/r0kS3uzh+qcNBHEpq7Dj/vKuaD7iWoDDVVFbOzcJt1SJ4XBiRD8ZR65T5hLjSj+vjS6LmkZamHPdcz9gDLUJJPCUtIkLlwk1init6a7oKxE3GU0Cs0iARdB3o/sqsZjlA0GTgAS6SeAlaXa5nnFQmos0Ftjfq9L8KNufR3we7tnoLosgnCLK+rfo9Nr5w7PPq9nl3Fzt3LBdnJDXRD+r8rMYxanCaCvt/3FE2d+PqGuQ8fnNAriUxz2MRWTTmSENZVI2i/u6kZZjhdv/yH4zP/fFr1Vp4EFa7p30p244Ly+ozpPuRY3z5U8cOYIurEMKx7/OPgtP2DMeiPL2jbhX4wCU9D/MPuk5vxhj0Y9DSbk4sIf0KYz6Cp9eW+bRTyL6/K/R6yPmWj87Kie+FVEvecn37WrL3/8efTqi33U7LH/gzHC+pKMcnvbTLtrA36dGdYMpDhnpSIO4DphjzB3awZzGCYRJsbns4JAzb0p99A8e5W/O44zV5wzlfCOiTSdHzeswh7Y7nvSFtuAIFWNMGRPmF22jHvbpV+k37WH+4KRglI9zQ330A6eqbnCEZ8n/g+xTJnVT1jujwqGZl+uT84wRbYEVeQu7wvJpOVa/j7BPeyifudrL1ubgx6MyrxkTAkn0gXzco1jjvyOiHtpd6uBa+UREu7mP9DPykob+zzUSEQyhTM4j5kfp11ez/7mIPnP/ODCq22n5g3aWtlMOY0Q7ycMc4Zol2DKu4fDfEJX517z2qIcxZU6+KoJJL2NO0wfKoYwyjmXM6eebI8a4eR/AOWcM6Fepn34zT5l/HGve67iPfDaifIIhdWP8CMLRZs6Tn3H/dkS7ECzfFXHdFuN+QoCP9P8Y0a5edlIOfjMq7WUcYEiZhRf9bd5bSllcu1+KaBvXVK9rt6Sd+m39ZjhpnX1oGjQ/xgDRJwYVNT/oJ62PtkcCEpCABCSwFARuTCFXRziVLK54KnRxdEfLwvm8fGTE0xTswujWar/NhqcpLPA3R2ui9RELvS3V/jOzZYFIu3gq1esJTQ7fb/SFRSFl4fizYGPh/g/RKO26v8A+O9fn+LUR9RSnm6dT9Gcc25ZMz41eF7EQpczXVsc2ZLsQsYD9DxGLaRxf0twTweot0Xuj26PFGOVdFV0XHRThrMD03kahnK8vxstp0lEG84c+fSW6KLomKk8NS9r6ljn36OiFEU8CcSRYlzFmzNFLondE1Fu3y/PH86KnRvMRTFjH0Q7GiLa/Mbo0qttt+ePt0UJEXuYK+TDm4qaIsWCM6U+xhew8Lnp2hJMPA9oJd85tiGgn+erGvKAftIu6mzxJSz0LEY4M7IpR9psixvlJ0aqItqKbo1sinLk3RFwnGOWTj3ZQb/N6hgf5nh5xzZU5TNqPRjhUjP+vRc288HlxxNg+PiJ4BYPSfupkPm6I6tcDbWIe0Ebq7mVX5uDLI/rFPODa3zdi7lwR4dh/IGL+cx3QPtrJfYH6YUT9W6N+BlvSLETNOfnWHKMMrkUcTPpF/5lHMCHAt3dEmmbed+cYc4571qFVWsaItlAnzj9pGJdx7UPJuDF6SsR9Fza0EaP/XC9cE3CCV7ON2xPGSPvkiHE8PoJzGWvyvT6in/SDvpZ5ld3tY8e1gcGefGWcGWPYwox0xci/EFEmAZW6kZYAAPdp2gR3ONOvheim6P3RWVF9PtE32FIfASj61MvOyUHmE9fswRH3FkS9jMuG6G0Rc7OXMc/2r84zvv2Y9srrsZ1IgInMIoEPzlH0maQnEsYNjQmiSWAWCRyZTnMjLVHdD2d/zSyCsM8SmBICL6ldz2dmnwVp056RA/WnI/PNBPkbR4cnSuXpzulVGhag/xzxJIUnJCdHveqokvfcsOD7SFUG5bMYXBd9IaJMysbBaVsui9n/Ef0wojzuaSc08lMe5/jcPyOqGwtajpX7IGVxrGmk4SkS6Wjj+maC/I1j9YIqTfOpWI/k2xf0r4l4ovajKl9pR33LeNEv2J8S0eelMha7OLGw/5uol6O/VHU1y4EzDBh/eK6N9mkm6vM36XBYjoqOiFb0Sdc8jDPAHJyP+AwkX3GqmmnrfzO2tI92Hha1beegMoed4xqgrbSTeotDMyzfoPP0F94I9r3mer/8zDucwPkI5iujttdpvzI5Dn/Kosz5iAAN47SzjPoPjOAM41GvL64h2s4YwWcpmKSYXzHKZH1GG+cj6mFOjmo45YUzc6vt+MOIubMqGqfeXu2kzHItFnZtrsVeZfU6xrgwz4+KuGbp+yCjXwSkud++okX6QWVNxbmlHIylBnJeCkSjGgsgLiSiVPeOmtn0EpCABCQggQkksCVtujhiUbc16vX5dmGVhgX2TdFlPfpxT459K7ok4skJ6TAWgNSxIeL4xuinnBjBFpL20xFPfVjU0g4WaLdEl0Z8LvOkqm25tON90f8d8fQGYzFP2ygTuzpiUU/Z26pj9Q3H6Cu2UD9R2+f8RdF+EU+mNvVIBzf4FG4LPdLUD/GkiqfgPC3kSTOLVBxwuDCGlEf/KJMHF9TP/lIaff98tD76k+j8iKfWO8OYnzBAoxpjW8Z3lLzMgRsrjZLv9iS+cpQMS5CWa2Cctg6qmvsCGseYi9eMk3FInnLdLPXcHlLt/aep/9q2iXuk4xrqdV/pkXTsQ8yF6yuNXUgy3hpxbxzVYDTuvOlXF
2Uu9fyu1zXquByazHx2kI/77c66D/bj4/EdQIAAAN/1+WJEFE2TwCwSIJLsGwCzOPL2WQISmEQCBEXKk0iCIgQEVkS77uDGzqV8XrvlLYpTo5359HUHd83iJSABCQwlwD2Pp/58ZYG3sgj0zrzt6A+emQcsAAlIQAISkIAEZp4AbxYs5knkuAAXkpEfVOMV98dGvBXCkzlNAhKQwCwQ4GHwERFvu30yGuetpKnjxGtomgQkMH0EeKWM18HKK5i84sgrWZoEJCABCcwWAb5ewNdDCAIcHvE2giYBCUhg2gnw9bBjopURPyrLV8Z6fX1u2jk8oH++AfAAJB6QwFQQ4Ptc/PprWejxxIeAgCYBCUhAArNFgO/Ts/hlzcePw/G5wBsJmgQkIIFpJrB3OsfD7nMjfgPFdXA12gYApnna27dZJkAA4O2zDMC+S0ACEpDA/QT4kTsCAfwIIT/4pklAAhKYdgL82N8HIu55vvpfG20DANM+9e2fBCQgAQlIQAKzToDXXhdmHYL9l4AEZooAX39FWoOAvwHglJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEDADMwCDbRQlIQAISkIAEJCABCUhAAhKQgAEA54AEJCABCUhAAhKQgAQkIAEJSGAGCBgAmIFBtosSkIAEJCABCUhAAhKQgAQkIIFdRSABCUwtgfr1fW96iTQJSEACEpCABCQgAQlIYEYJGACY0YG321NP4OD08G+jvaqefj7bs6KtU99zOygBCUhAAhKQgAQkIAEJ9CRgAKAnFg9KoPMEcPzXRftXPbkt2z063ys7IAEJSEACEpCABCQgAQmMTcDfABgbnRklIAEJSEACEpCABCQgAQlIQALdIWAAoDtjZUslIAEJSEACEpCABCQgAQlIQAJjEzAAMDY6M0pAAhKQgAQkIAEJSEACEpCABLpDwABAd8bKlkpAAhKQgAQkIAEJSEACEpCABMYmYABgbHRmlIAEJCABCUhAAhKQgAQkIAEJdIeA/wtAd8bKlkpgFAJ3JvHm6PYq07ZsfzpKAaaVgAQkIAEJSEACEpCABKaLgAGA6RpPeyOBQmBLdl4T7V4d2JrtLeKRgAQkIAEJSEACEpCABGaXgAGA2R17ez7dBG5O986b7i7aOwlIQAISkIAEJCABCUhgFAL+BsAotEwrAQlIQAISkIAEJCABCUhAAhLoKAEDAB0dOJstAQlIQAISkIAEJCABCUhAAhIYhYABgFFomVYCEpCABCQgAQlIQAISkIAEJNBRAgYAOjpwNlsCEpCABCQgAQlIQAISkIAEJDAKAQMAo9AyrQQkIAEJSEACEpCABCQgAQlIoKMEDAB0dOBstgQkIAEJSEACEpCABCQgAQlIYBQCBgBGoWVaCUhAAhKQgAQkIAEJSEACEpBARwkYAOjowNlsCUhAAhKQgAQkIAEJSEACEpDAKAQMAIxCy7QSkIAEJCABCUhAAhKQgAQkIIGOEti1o+222RKQwHACBPhKkO/e7CNNAhKQgAQkIAEJSEACEphRAr4BMKMDb7ennsDh6eFXou9Welu2q6a+13ZQAhKQgAQkIAEJSEACEuhLwDcA+qLxhAQ6TWC3tH6faN+qFw/J1uu900Nq4yUgAQlIQAISkIAEJLA4Ar4BsDh+5paABCQgAQlIQAISkIAEJCABCXSCgAGATgyTjZSABCQgAQlIQAISkIAEJCABCSyOgAGAxfEztwQkIAEJSEACEpCABCQgAQlIoBMEDAB0YphspAQkIAEJSEACEpCABCQgAQlIYHEEDAAsjp+5JSABCUhAAhKQgAQkIAEJSEACnSDgr4J3YphspARGJnB7clwR8T8BYNdHd1f7biQgAQlIQAISkIAEJCCBGSRgAGAGB90uzwSBzenlKyP+O0Dslujmat+NBCQgAQlIQAISkIAEJDCDBAwAzOCg2+WZIFDeAJiJztpJCUhAAhKQgAQkIAEJSGA4AX8DYDgjU0hAAhKQgAQkIAEJSEACEpCABDpPwABA54fQDkhAAhKQgAQkIAEJSEACEpCABIYTMAAwnJEpJCABCUhAAhKQgAQkIAEJSEACnSdgAKDzQ2gHJCABCUhAAhKQgAQkIAEJSEACwwkYABjOyBQSkIAEJCABCUhAAhKQgAQkIIHOEzAA0PkhtAMSkIAEloXAutR6cuTnyLLgt1IJSEACEpCABCQwOgEXbqMzM4cEJCCBWSawJp1/ffTO6OGRnyOzPBsmq+/818Z7R/tFe0xW0yayNQ9Jq1ZE+y8jr92r8WLM9oy8n0zkVLFREpDANBHgw1KTgAQkIAEJDCOwbxKcET0pmotwsDYNy+T5JSNwbEp6fLSYz+07k//T0UerVh2T7WOjcZ3lbybvu6MtVXlsjooeE+1VO9Zr994cvCe6I/pudFV0RXRrr8QDju2TcydFj4gOjEpffpr9rRFz9OPRlRH19bMzc+IPo3H5/nPynh/V249j/fIIJ5fjz4vqxjV1YvRHjeO9/oTX3dFt0Q3RZdE1vRIOOEbfjogeHfEGD/WX/sLmxuiz0bnRddGOMOpbHZ0QEUBcFcEHow3bIsbsIxFzgnEcZJR1SvTQ6BvROdH1gzJU5wg0UDfjg5H37yM4DzPm2VMj5h72lmhjtT9sc3AScB3PDUuY8/Sda/YH0bXR5VH9WitF0P8/jRYTPLk9+blOLqoKPSDbP4tgtBC9ObqpOsdmt+iVEXP85uht0Y6aM7Vq3ZWABCTQm8Azcvg70Rejg3on8agEJCABCbQkwFO506IvRf8e/Ty6rxJvAYzrMLWs3mQVgRdU/H+S7bj6fvL+9xrR/5Z9nItxy8NZxKGpG470v45Q5l1Jy7z6UcQcOzXCuRhmOI0viqjrx9HPojIvy5a5SvmsCV4bzUX97P05QTnjsnhj8q5oFI6j+MOqTJz2puG8vqc637Ze+kM7Gct/ipp1Nusof6/NzgerfJRRv44LLxgyFjB9TcRT+aU02kq5jAf1DBoz5gP9mxvSgMNz/vMR/D4TEdhoYzjLh1X5yEsAp60DzTUE/zJmH2tTYZXm6Gz/pZZ32LiX6+Pfkoc5RN3N4BpMSTesrEHnCcJx7RY7Mjuso8lDe5mrdSPQ9rXq/DezJb0mAQl0hIALt44MlM2UwIgEWBzzKmxZ0JQnRzxh0STQhgBO2ProhRGLbOZU2wVym/JNMxoB2POZ3e9zuz42/Z5ikreerpRZHO5++fq1tFdbKJPyEE/3ebLYr1ycCF5DJw/7OKmvqo797wH5eOr4loj5ST7Kpy7eIFiIeHKKs3lIRFr2nxOtjF4R9XpyXm93v/Yma1+rc60nglHh0StzOc+9GVY88e1lpCMYR3+5FtnnzQf69sSo/nQ2f/6K8ZYBT7p5KEI76d8t0eXR1urY6mwPjSgXTvBiPF4cwXUxRp2MxesiHHT6Qht4
K+LqaHP1N32hTp6s4+SeHM1H9G9j1M8KQ7b9xqFX3jLmnGsTdCId85Wn7bSRPlDnsRHMtkTDjDpLe5mnMOj3uUyb4MCWfNTN/fj/if42Ys5jnCvlVod+ZVNn0m9uN9kVNtTNuV5W+tGWXa8yPCYBCUhgSQj4BsCSYLSQLhPYddddj0z7ieiXJzsfzv6aLvfJtu8UAiz6cC5w+HlayJPG5lPV+t/vzPl+i8Od0uAZqqQE9QjsNcVTeJ6qlrFpnq//jQNZ7K+zw5NW8uGccY8YlLd5DoekOf7PyrHvVWXyWxGDniIz33CkTou+HJWn0jzR7fckl9fWeVrL02PS8wYDbzLQlqbRXhzZb0WlbAIHOGtNY77ztBMWT4uafR32N1zrjhblHxjx5JYyearetNU5wJsHnOfp7hOaCRp/U/4BEa9jl7ZyjcK5WTdZOUaQ4OsRdcDsq9GpUX0e5M/tRh95q4LPDnihz0UEBsY12kDggafItIEyeWLM3GMsm0YAgvlAm8sbAjDkntTLOM6TasqmrUf0StTjGO0ib7lmyNuLYTMrPL9d5XtXtrzJQBkErtrYMUnEXCcP8xg2g4xxOjr6ZFR4MJ9hVIw0/ebn2pwrbeSthX7pOM49pth8dko72TJX60ad34joBzxYc2gSkIAElo2AAYBlQ2/Fk0LAAMCkjERn2sHClwUdjiSOIAvFsjAetH1n0jUdwM50eooaikOLg1nGqm3XcMJKAAAHBmd8sTZKAKDUxfzjdWy+AkAfcPheEjUdMp404uwWpxpHaJhzyvzEscbBpmwCBjhPzXlbDwCckvNLYUsdAChtou1/E5WgBk79XI8G4/x9OqLfBAx4pb7pyDWzUTbBlxKQwenkawqDAjnNMup/k+8dEW2grM9GbZz0uaTDQS5OLwGBXgGDnRkAgA19waFG8GUOxEcXRAAAIABJREFU0rdvRwQvhtmoAYBS3lx2GD/qYtzfFvUKejXrX5UDJQBAYK6tzSehAYC2tEwngY4RaH64dqz5NlcCEpCABBZJAMefJ79nRh+JeJraa6G9yGrMLoG+BO7NmesinHAMx+Y/V9vq0PYNjiNPQzl/e/RX0ZXbz/Q3Xq++MHpfdHdEkOMR0Vz/LBN/hj69Nbqpaik8eMW+blzXj41gBt8LoldGN/5Kqgf+QdmXRy+MFiKcXp7uHheNumYkL+0ioEIbro4ING2MhtlCEvxlxLzACFw8Nxq1DVX2Jdnw9gVBUthuirZF50YwI9BxQrSjbEsK5m2ROyMYrIjmdlRllisBCUw3geW8kU43WXsngQki8Ou//uu7PPjBD56gFtmUCSDA01SeDp0c8Srrq6NhTwebzWaBz2ujsyoYaktDgO8z41AVw8lCxZhrj4lwfLBzossiHMthRrCAJ8+XRDiU5Km/7jws/ySe5/vjOIUYbJpPg3FUeZIPw5sjnuJfT+IWhkN7VcTTbupZGT08wskdxWjTX0Swvi0iCMEYtDX6R8CAwA3XGm9yLGdw8qjUX+YfbycwZ/l6XWnfk7K/o+YVcxaGt0QY9dSvj+qwGwlIQALDCfChoUlAAlNO4N5726yRpxyC3SsEytMjXp1+cnRMNO5nwVzy8p1YHIZZNBbk589ix3dAn3FminPHDYsnnThWxTjH02Res2a+8bYKadoabwq8rMqDY1kvu20Zk5SuzgsezMViXOMHROX75Ruzf200ygfBrUl/abS5Vhb860Ga/DnQ+F750VWKrdl+PBqlDWT9aPSKaE3E2xs44e/lxE42+vLHEW2AzYaI+Xd5RFCJ+ylfCUCboqU27tF7RSXQw/wdZf4vdXssTwIS6DCBcRd9He6yTW8S2HPPPfe9++67V+f4rzzNuueeWV3TNwl14+9873+X3Xf/xcOHn/zkJyzU7n8Scd999+2bY4cnzf70xrGd7DFlLIvV9+vj9qAHPej2n/3sZyw8R10EMi9Oj54fNZ8ajgrmiGRAs2o4VQYAFj/6TPgDo0dWRfFkle9U151anFmcMIwn2sz9UT6keAvgqip/1zdcw8dGcxEONQ4pPIpx/rcigiYY/13bKI57lW17nisiggkrI94YamuM6cERY0YbKeuatplr6XgD4cKIAAD9+qNoOQIAh1dtoF+XRFujEszg7QoCAASn/jy6rHYuu4u2ErTlfx8gCMC8p/5xxnTRjbEACUig+wR+ucrsfl/swZgE7rrrrnVxEF8eh2L/bHcpGrM4sy0TAZzDO+5g3bzLLhlLgjl1547Fy0FJM8qCeZl6YrX1YWo4/ffDyXXKIvN5EU/oRrUbk2FDRKCIhf24xgIUJ3gWjcV/eQV7GvrPPOA+UXe6B/WLwBNzb1B6nlLjAPa67+DUlPsUjuXjovkIrldGm6r9bLYbwUscLAxn9+5qf0dsfieFjhLYwgFfiEYNxtXbDg/u2Txh7mWs13CA4cl1+/KIPAQ2cEi5pouRZnuwNwYnrtNffDhUB1tu6NcNVVocz/8YMWY45cOsBHVIRxsWqu2wfL3O8+OQGGUSjNjZRp/5CgTXCH3/VFSf9+fmb95SYPzmq3QL2Q4zyoXroDFnzq+IcP5PqQpcyPZzUfk6QHXYjQQkIIF2BAwAtOM07am2Lyrymvi+OP/F4kTiSG7/s2x/7ddYb/zy73r6+zO6s9MJlPEpFfN3CeRUr/+z0Ni7jF893WIa6/gvht7wvM2vbvB3jTkLx3Hu4TgpH4hwGk6MHh2tjcrTwuEN+2UKnLSXRr0cvFHK6WraNo5QV/p2dBrKPGg7ljj/r4mYA/0MR5Uf6uvlGPNhQoBgv4gnxasj6sa559fOefJcN5xaPqsw3gBo2856GW33T0vC49smTroN0RujuhM+QvbtSekf1+JD+2TkWsfBhNNhEezgemHEL8ITOCmG08j9ASNAgOrnq1NDNzjuxclkvCiTetvMe9L/ZlUD6RfjrG6pyoEB9ynKHqc/VTEjb1YlB3MZ/tdGV0b1Ob01f5f7KfOZufOGaJgRUOBrWP3Y8LlNf3k75qCIfm+L3hddFGkSkIAExiIwzuJxrIrM1A0C/Fhc/S2ApoP385/zv88YAJi00RwUAKi3tYxfOdbMN2q/mvNj1PymXzYCLJ5vit4abYhYsPKECQeQRW5b46niQrQjnbG2bTHd4gjgaKC2xvoBR3OQ8Vo0GmY4iAQUroo+Fp0fNZ/wU98vItC/cL52pAOIY4ba2kISluBE2zzNdDjXXIdtAg849JdHG6N3Rdc3CsNxRBhsx70+yVd3dBmDUm5V/MBNYUI5dw1MOfhk/e2F0obm/BhcwuLOHpHsa6oicLxxwuvGXORrACdE3D8fFb03uvVXUj3wj7kcOuOBhx9whPIJel0TfTI6p/r7AQk9IAEJSKANAQMAbSjNSBqcOb5vvM8+++yy33777bL33nvvku+Nb+99802A8rcO4GRMjqYjX38DoD5GvdItpgeO/2LoDc/b5HvppZfW3wAYXsDwFCzMeaKF87UhYuF6dHRwtFiHJkVoHSLAE3cc8LaO1feStjyZ7dfNrVWa+hNjniDjTOHwFsfmvOx/PqI
NPEXv5bByrDj9zM0SDMjukhtOXtOpHlTJV3Oy31PcQfnq53C0FyIcvWKs0QjKzEXF8b4s+wRJrowIAhAMaBqcCqvmuVH/rq8TRy23jCNjtZjxWkwbRu1vMz1vZvxhtH90W8Sr92ybtjEHmLvM7QMiggb/P3vvA3VbVdf9HlBPXONFQi4hnU5PjBMvERERIREdj4RIiIiGhKSGSWp5zdtteB2NbqPRsN7e8vZ67a+akZoRKZERIdGJjkRIRISERKdzj09HPJ6LvIjEa4aI9/t5XPO8i3X2n7X23s/z7D+f3xjfZ6+91pxzzfmZc649f7+19n4Y14OMchhn9SAL/bwp2lxlJPixPeJrB/Q5QYC2c7QqwhcJSEACTyZgAMAR8SQCPGJ80kknbbjooos2POMZz9jwtKc9bSChpoMyMLEHV53AsP5oBgCGpR9W4WZ5w9J7vBsBAnL5ob8NfHVj3759Gz72sY9tePjhXmvPbuX2SM2CEsfivogFLj/Gdl60JfJzogewOdx1c9rEY+zD7lqWpuMMDnNEbk0a7owW5xgnEOcdB+ml0daqMAIEBKIYf/2MgV8CCQQPxnEo+52j7Odfu31gWKLacRzdYSyGFQf3P4hw9oqVAAA/fHdpVJ7QIC39VQ+s1LKt1KU4lQRcRg3mkY872sUos+6s1g4dsMn4KP1OO555QIr2O0q7KROHuF+7u46Jevp+wQ2CoQjHnPZ8R0RQoGkcL8GYI7PN1zluiPrVlfwEDN4e7eZNZZSD808w9iXVPsbXnogAHduaBCQggbEIuLAbC9/8ZcbZ2Lhx44ZnPvOZG571rGetPBEwyMZ1IAeV7bHuBIb1R9NhH5Z+WA2a5Q1L7/FuBOoBAL6eswaGk3VTxF0m7siyCD034u5X18X1GlTXU0yQAE4jzlWvO8qjnmZfMnJXn9e67cgbxtjPRGdHl0U4Tb8U4eT0sr3ZSf2wY6NyR7xX2n77CBzg6A1rI07usDT9zjHqfvhzN5hAXNNuyY5PRfzexukR8xHn/B1RL4eQAEFhjgMPW3gNckZz+AA7NHvK9/hh8pmobaADzuUHBKkrd7VZUPSq7wEnbuw4oXpP3uXGMc5TyuQaNXjR8uTMBEewehn1FJR1WnRctRPH//KoXxtKsIRyT41OjO6s8vZ64Xp7d8RcaBrzhjHI+S6ICILQvuubCX0vAQlIoCuBLhfKrmWbfkYJPOXgp6z8O7mv/uqv3pB/DziwFeM6kAML92BnAs0fjWsW0PwRwGHpm/mb75vlNY/7fjwCzC/6qHylY7zSWudmMYzzcF2EM/aX0YsjHDUWwJoExiXAGOPpgF+McGZx8HjiBAf/LRF3O5t2X3bg2GLcISXf/RFltTGcsosjfs19V8T/pMf56ufMtSlzrdI8kBNdET0r+vGI9vMfQJinV/eoBI4j/0aRV5zS/xzhQMKri5HnpCoDARj6pS1vuOL8soggAECdlyLYd7XvrTIQwCj/EaCUwb4SGOI8qI3hTJfrGW0qZdTzLuUNd/wJHNEOeA4aL6UMmG+KGNODAgA53NcIBr2tVg6BiDdGPIVwe99cHpCABCTQgoABgBaQFi3Jl5740orjz7+UG/YEwKKxsb0SWCACLHSXIxb+LGL/PHpZdEZU7pxlU5PASAQYX7dFvxLh6OA0nR9x1/hXo2b0mXFIEOCUiPF3ToQDz53pNsZTAzzRwjlwqPnePvlnxbhbzOPiJ0dnRUvRm6J7K+Vlv8EWVjiR3IneGv1R1CUAwBMDJ0Twxiir153q6vABLzjVnI872WdGOMQEEfsFADiOo70c1Z1xAhC0AaOvt1fb5YW0D1VvcOgpo40RANhSJSSIwJhoGqwRaW+O+DpLebKimZb3rKmpK0Es6sFXN2hXF+6Ug8EPVm+NeAKBunLt/ZGIQMygeuSwJgEJSKA/AS5qmgQkMCcEuCM/SM1m8lj5OGqW5/u5JMDimMX/VRF3HX86wnHCydAkMA4BHLoboiurQrjjzG8DbOtRKOOQp1GKo/aD2eaucpt1DM4szhN3UUlfHKhZG8M4kj8XEQygHTjHzEna1zTmKAEW2ogD+aIIZ7StHZuEMMaRfSTCkcch7WLk41864szSt9Th+D4FcLecAMfvRhdHh0bczSfIQR1oxy0RAY+6waI42Edneylq8xQAzjoOOsY4/Odqu7wQkOLu/1LEuf8i+lC0fYBuzLFrot0R5cOd4MuoxnnhDhfGP0wIfL0kajPuRz2v+SQggTkn4AVkzju4bfN8lL8tKdNJYGEJsEjmDuAV0Sui/xLtXVgaNnxSBHDo3xvdFbEmOTH6gQjnvmk4WDi2OEPcEf3ZCEdtkFEmjj9Pr+CE4lQRdOAO+SwaTj1PTGA4ujiEF1Xv6y8P5g2/HA8vAgSXRpdE3CUfZsckwRuis6qEOKF8ZaL5VMawcugnHOabIxxigjA8xt4MRFC/b4lOjy6I+C0IngzhKyIEBuhDHH32NevA/n+KCDbA43kRgZFhdkoSFOecMuBaNwIV3M2nbgQ+4Mg1cJAR6NgXMb4wOPL1BZ5YGdU45/URwQeMMr8/YkxrEpCABEYiwAV5Wo3oMxe6rkGKb0oeLtjT3LZpZW69JCABCQwjwEKbxfCeiAU3DpUmgVEJMH4ILP1WhGOLs4RTy3e93xXVxxePevOfCggSsEbAWSQ9PybImGwa4xMn780RziXriR0RjjFO3ywaPODCr8zjBLJOem2EA8ud52I4ozje/GcB0hwdcTf96yIYLkekqRvrJtjyY4PnRrDF+eXRd/qoq1E+1wkeY6e/COoQhDg8oq8JLJCGNuHc/2306ohAwaXVMe56k+YXItI3rYyf23Pg7Ih+pv6ck329rk9nZT/no2wCCndWysuKMU7gQJAAoxw4NHlVh5/0wvXxT6PLI8YfwQgCCfTFKFYY/nYy099L1StBsp0Rc0KTgAQk0InANDvJr0pLXhnhzHcxPrC4qD8QTXP7urRpVdN+6UtfWtXyLVwCEphLArPqQM1lZ0ygURenDByVxzqUxd3JP4ve1yFPr6SPZuf26OqIz32cVb4KgFN/SyMDjhTOIHeJj4xwVE+u0v1VXvdGOHBLET/4hzNJOtYSJdCAwzfImcNR5omBLgYL6kTZq20P5gR8FYe78qx5aP/rIwIddYcXrldE/yl6XXRUhGMKs5uij0b7ItZKSxGPxG+NSIfzivNOsIB+6TIukny/kY8+o77c0d8UXRgx1nCs/zriPPQHazfqTBvYLgZTxli/OtCvfxidEB0TlTFxR7b/PoIX7eFHFDnviRFjjHPi2FMvAgHFKOPZEYEK+pXAxP3/8/DALepOmbdG26ItEU8BwGBUo0zaQpCMsQ+b8yOCZFeOWugE8zFeYNgmGAFzAjT3TvD8FiUBCXQkMM0O8mFpCxforgGAgoAFgNaRAL82XlfzqwHNf/vWfN/xdCafcQL2/+p2IPOvqdU9o6UvMIFj0/alju3HWcPZmYQtpxDuNHMH97gIx51Hne+LcOCK4QTi9BCA4i7y5koX5fWCqDjArG9YPyCcjtsifpxte9
TPkcyhFeOuLU5iF+PO7+90yTBG2tKed6SM/yNivXRe9JHo2ka5OGXcDf/XiMAGbNFS9PKIsrA6r+JwwuvGqO4cV8k7vTBOrom4McPTGvQtjjF9V+8z1m2lz7K5Ujf20Rd/FPGVj16ONPX7QISTTxspdynCkYdLaWMpn7ayjwDTj1avedlvx2eLcUh6ggs4q8PGTC37SjsJjG2L6BuCDjDfWU/UcZvxfn30PRHOP/O1BMmo43oaQajTWlaAscV41CQggXUkwMVtWo0PiW+MeFyti4gs7o34wCkLgWlto/WSgAQkIAEJTJJAcXa6lsl6AMeoi8jTax0xSh3Iwx1hHnXG2cIRxKk/p8c5cPhwdL8r+vloT5WeO6PctUVsUzcCCNwZ55HpG6I2jtwoLAbdUKFtw5gMO54inmSscfhxuOL84VDzC/FHNxPmPWnfFz0/4k48TKgvzmmdV96uPHGBU0za66JxnX/KxLiTTvAFp5V63hrR5nqfUR+ceBxu6vmtEfWmP7ZGPFr/wQgHvWkEYN4RETR6V7Qvoo04p5wDsc2+3RHl83sBjLk6e9IQADohwu6I4NXFqAscqQN1p6wz+xTQpd+pB0GmUi5zg6cpqPOkrc2YrZ8Trm016bpangQk0JHAQR3Tz0Ly16WSXNjvj/iQWe/I6CwwuySV5NHFzdzRfdGLXrThVa961Yajjz565Rfl6+Yd31noTus4LwSYf4899tjKf2r49Kc/veEVr3jFhs997nOleTdng2vcOHeV5gWV7ZDApiDAMTwywhF5MNoVLUfeDAiEhm3O+8ILPjiVON5wWwtjccGj49wZJ2jBe86Nk0td6n12ft5z13gp2hMRnFiOBtkhObglWooIclAeT0NwvaSMLo53kmsSkIAE5ocAH5KaBCQgAQlIQAISmGUCBP2R1o4ATjBaL8MBx9FHw4wnEW6KeCIEZ355WIYc58kFbgB5E6gFLJNIQAKLRcAAwGL1t62VgAQkIAEJSEACs0aArxC8b9YqbX0lIAEJTCMBHrnSJCABCUhAAhKQgAQkIAEJSEACEphzAgYA5ryDbZ4EJCABCUhAAhKQgAQkIAEJSAACBgAcBxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQgAQlIQAISkIAEJCABCUjAAIBjQAISkIAEJCABCUhAAhKQgAQksAAEDAAsQCfbRAlIQAISkIAEJCABCUhAAhKQgAEAx4AEJCABCUhAAhKQgAQkIAEJSGABCBgAWIBOtokSkIAEJCABCUhAAhKQgAQkIAEDAI4BCUhAAhKQgAQkIAEJSEACEpDAAhAwALAAnWwTJSABCUhAAhKQgAQkIAEJSEACBgAcAxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQgAQlIQAISkIAEJCABCUjAAIBjQAISkIAEJCABCUhAAhKQgAQksAAEDAAsQCfbRAlIQAISkIAEJCABCUhAAhKQgAEAx4AEJCABCUhAAhKQgAQkIAEJSGABCBgAWIBOtokSkIAEJCABCUhAAhKQgAQkIAEDAI4BCUhAAhKQgAQkIAEJSEACEpDAAhAwALAAnWwTJSABCUhAAhKQgAQkIAEJSEACBgAcAxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQgAQlIQAISkIAEJCABCUjAAIBjQAISkIAEJCABCUhAAhKQgAQksAAEDAAsQCfbRAlIQAISkIAEJCABCUhAAhKQgAEAx4AEJCABCUhAAhKQgAQkIAEJSGABCBgAWIBOtokSkIAEJCABCUhAAhKQgAQkIAEDAI4BCUhAAhKQgAQkIAEJSEACEpDAAhAwALAAnWwTJSABCUhAAhKQgAQkIAEJSEACBgAcAxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQwewQOOuigDb00ey2xxhKQgAQkIAEJSEAC00LAAMC09IT1kIAEJCABCUhAAhKQgAQkIAEJrCIBAwCrCNeiJSABCUhAAhKQgAQkIAEJSEAC00LAAMC09IT1kIAEJCABCUhAAhKQgAQkIAEJrCIBAwCrCNeiJSABCUhAAhKQgAQkIAEJSEAC00LAAMC09IT1kIAEJDB7BPwMmb0+s8YSkIAEJCABCSwwgacucNttugQkIAEJjEZgY7KdGD09ujV6YrRizDUmAfgfEx0dHRrxmf549HC0L7q/ep+XvnZYjhwRjRrM+ULyPhTx2sY416aI10Mi6vxYVOpMvXk/zGgz7cfI8/lhGarjjF3OTV5YPRA16745+6gX4/rB6NGojVH2URGvtIF6cY5ilEnbB7HmnOShPZy3DYv9J+ixQV1gxTihr2FO+ZRN25ajtuySdCSjvTCHa+l3Cip1YJw+ErW5jtAeGGL0G4zb5CP94RHnxzgf7W9jzC3ylTGxN9tt+4VxxpgY1OelDrSDcuFC/XoZdaAd9OU4xhig/aXv623k/MzHehvLOOL82O7qdbVfaCufNbC5L+Jag
9F+jpX6sI9+ac7llcR9jLz0K20v1utawnij/aMadW6Ob+Yjc7Gf0V5Ee8jbtl2MM9pzZFVwl3FOXtpamDI++o3Dqvj9L7SFc8KJenedI+TlvIw9eNWvm3nb06jvUnWEcxbOPRM3dm7Je3R/xLhqc7425c5MmvrEmZlKW1EJSEACElgXAnzgHh9ti14YsQi8LeLDV1s7AjgVp0RnRt8abY5YDLP4YtHOwm05+ofo1uiuqN8C5/Qcoy/JP4r9SzJdFe0ckJlxc1x0avTtEQsvnFIWqqxDWNziiC9H/xgxpu6JBjmml+T4t0Xkf1t0Z9TGWGi+PIIbC8a3R4zjuv1k3uAYwPJvovdEbcY4Zb82WopYWP5iVF9A47S8KeK1n9FPxQn8TLapG21bjvr1Ya+yWJCfEJ0RwfzYiPqxn3Jw8KgjvG+PbolYfE/S6HfOyzilDowB+r0EbmCDo8AC/O8ixuq+aBBrnOmfi7CPR78eta03Y/2lEfPkL6P3RG1sWxK9IGK8wu7Xorur7WH5uV7+aAT3YUbZzAW4fDKCC33THEPnZ9/3DitsyHHm2+9HZd5Qzx+MGCOM+Q9F9EUx5sNPRBxnfL66dmy1NmG2rTov9fyViDmLcf17UUR9ir0zG7dFbecJTvjFEdeRYr2uJT+Sg4zjUe2PkvHGqH49uyzvv3lAgcwBOJPnv0d7IsYcY2LQdZHr4ckRdS7Xryuy3ca4Lr8x4vrE+f8ien+bjEkDn5dFSxH83xsxn6nDMCMv44m+/Ovo6qj086C85PvZKgFM/iq6alCG2jH6/qci5gHjmmvhQhkDRZPAhoMPPnjDl7/85ZX/O37wQQd/5TX7eK9JQAISCIGlaFvEQphXFoTvi7S1JcAi6SURi/VTo+JM9aoF6XDsWOizoO+1cGThf1GEYzaKseC+KdrZJzOLSRwWFus4gpzn4D5p2Y1jitPzwei6qO6E1LM9N2/Oi1jHsMAujkw9Ta9tAh04T+dELPpg0wwAwIMFIotgHI17Iuo0zCibOpGHxTrOxCO1TPQVZePEtjEW0iz874j+OLo26tWHzbIon3r8QHRaxFztZ4wR6grrP4ju7Zew436c7FKHrdmGZz9jfOyKGEf0B+3FEe5ltOXl1YHteX139GivhD32nZB9l0Q4luR5T480zV2cjwDZZRFtwugTxgxjdZhtSoKLI+ZBF6OfGXd/EuGa0d0UAAAgAElEQVTAlXnAGHp2VBh0KbOelvrjbJV5s5TtCyNemZ+wrRv1Z6xsjqjbagcAqAMOKQE1zsm4qF8LOFbqk80Vw2lkLD9Sdgx5PSnHfyg6sZaO607zWnJu9jGPRrV/TsYdUX3uPj/vmRdt7IkkeiCibR+OcJAJnLG/aYUb44M5RBrGTxu7IInIV8bqcdm+JqrXu185zO9y7SPNkRHzZDnqVc96OeQtY+/xbF8f0ZeDjHbS/2UeEGignBujYXkp966I8USdCYL+l4hzL4wZAFiYrh7cUBx9AgCaBCQggQYBPsi3RTj+Z0V8yPrZsT7DBMeFRRp3qFkAs2BhsYrT9mDEIog0m6KTo2MjHF0W0NhV0aDFGA4xizbKaWucu58jhEN8eVTuoDFuWLjeHu2KPhvRhmdE1PHUiEXn2RF1x9m5MqJt62EsMqnP66PdE64HDih385rOK4zoQ+YdwRI4IPrzqOhdUT/nOIdW5ucrIxw08tHfOyPGCW34XLQxelZ0UoRjQ9m0E/1SRNpxDG6XRXDjHNSBfr8tot+pA/a10fHRKRHnXqrevzWvN0WD2pnDa2LUH9EvOJaMSZwVHNJ+475fxeCPE9fPoaJfmDMwYQ7TN/Qh5/6vEfMSFYcymwcYY2YpYgwxbxhjvRwb+mO95tUBle6xAw5c686IbqjUqx31rOfmDeO3TQAAPjj+8G1rnJ/r3UNtM1Tp9uS1X93ZTx81+4I5RB3hQJ8WMRa/MeJpCK7XzK1J2KEp5PsixjfBBq41J0RnRjeOcAKu4ZdEvxo1r3EjFHdAFuYKTxzQfvqD6yXXkNOj6w9IfeAOxghPjNA+gkyMMT6XFsa4qEyzvS6V+6aOFeSDjAnD5Dmiep3UBOlYldlJ/pSnPGXDE0+IaXZ6zJpKYNUJsCBg8cWd223RlogPXW39CLBg5a4ufcEC5v3Rn0Y4FjgjLCbpIxZDOBE8JfCSavtVeb0rYgHbz27NgfdGXRa41IMFbtOox6URTuDmCMflAxF3NKkDDggLQz54WHSWBSfj7cKINrIwo+zron4L6BxaVaNuLGYvit4VTeqDksU7i/jlRu1Zu8DusAgGL45OjVjc8vUBFudw7FUP1jzUE+abqrQshgvzfdn+fMTaj7RLEXOcR9RxhODO+Xk0dlc0qp1flbGUV863PeLpgrsj6lAcAtZqjI3TovJEy9Zs4/gwBu+I1qvfc+oVToU93Bm3zEHmFnVmbH4hamu0nz6HQS/jfFx3YYJzQ1/ST2zfHO2ImOcfim6LehmB2tdEMKS+PxfRB02j3r3mbTPderyHA87nKyIc4z+LmC/9jPbRXsYwDjJ8e82Pen7mx7dV+UjLuB9mMPvtiHHZxah7v3HCXCCYtKNHgVwHGA/U9XnRWRGBgFdGXHcZS4yHSRjjmesN7K+KXhfBlHnJ/B3GM0meZOQl8HtrdEs06XnMvGQu0vc471y7jokIYhCwaHM+6nZTdHH05ojP1jb5kmz2jY6eZisLzy51pE2ID20+XJjUXQdul/OZVgISkMCaEFijp3RYdJwc8V1ZFhw4HixCtPUnwCKNvuFz7dqI7yLvipqfcTgrOyMWniwYcfJYLJ0T4RT0MxbOd0a8jmuckwU8i1fq9+vR70TUq1lfFrNoObovYrF8WcTYw6G5K1qO1suOyolZCMPm9glVgoU/DuGg/mAdwyL1p6OzI1iyjTO4N6ob6x4WxQRNSMfxK6LfjXD06gtbtjmOqANjCGcCB4oxQnqCAI9FXY3x9jPRUkQ/Xh1xR59+bS6uGafFseY4bTszYpzzlAuBCRzA9TI4fnvEWhLmfA3jyIjx8P0RjgdtbGtljDMvBxkOJn2CQ8M84pWvIeyI6JPlSnk5wJgzpd8IojBmJ+UkHnCyVdoBbwJfjEcY3xg1rxn1UzOGCRgQNMMB3BEN6xc4cS3lWsfcoU+HGXVgnN42LGGH48wJromDytyY44w/xs2lEXW9PPqbaHvUnFfZ1dnwt46oyiIgcXrEPGQ+bomoY1dbSgaClvdEk57HXI/pt89HH4w4F3PllIixwJgYZowRPpPOj7jubYvguRAGvGm2v0jlGPAHd6jk8UnLpObDlYvtoItGh2IXI+nK7wAcnN8ByPf/MX8DYDH63VZOHwHm3uOPP77y1Rye0Pmqr/qqDY89VtZ1q1JfJj0f9BdHOF1cS1mIadNBgEXgN0Q4IAwEPh93R/0+41gUsvD6vYiFEflwZvjcn8SCMcX0NQJGBJAIOnA+nEAetxxUXwqjXrsi7haTd2vEApSxuByttbFAJPjAYph1BU9RsBBeK4eKxS2OAXdxN0cwYXF7SfTforodnTc4pbAi
37URzFlDDTLWStdFBBtYDNN350Xcdb1pUMY+xy7LflgxLm+P3hLRp4OM+uLgMMZpB208NzoruiZa7fGaU/Q02oG4NjIOboi+N6J+26JjI5zsfnMwh0a2e5PzDyPmLn1C31OP1TjXyJVchYy0Ea4XRcwzfqfggSHnIcgBI/qFscsPwzF3+xljnbRL0S3RIRFO9bQa1/v7ordFzI8Lq1d+v4A5xhgcx7i2cI2DC+OO+crvqpwW8bnB+X45amuUQT7EHP7hiOvVpOYx5Z4fMRe4vsGA30bgs4KxwznbBACSbOX6ytzms+aNEde8eZ9jtHvlg3ma7X2pHBOzi70yiRkAeyMuHgvRkV0ANdPiXBTD8cfp4OsAT3va0zb8x3/8RzO57yUggTUgUJ+LX/rSlzYg5uYqGYufiyOcNhZG3AlgIaZNDwH6AweJV8SicNjnG4tgFrg7Ipysj0drEQDAaTo1YlH+QIQjuhwNq2+SrKRhQbYj4m4O7SxtbpOfMiZlrCF+K9ocHRPhlP599O5JnaBlOfAgkMOdde5yEoj4f6LCAz6se1gUs31PxF0x1kFtjAsLzu2HokuiTRHBhB1RF+aHJz1OCWOM8fb2CGeijVGHHdH1EaxLWdSLmzlrbYxdAmZwfTT6h2g5wiHFKeeayV1TWNPWSRvccfow+pT6sB5ejXNVp5mKF5zQcyLGYLkODBuD9M+tEfN0KeK6cVPUzyj7OyKupziKfN6d2S/xFO3fk7pwHaB9x0Vcj5grXKeGMRrUjLNyECaMsxsjPjcIvBG8Y9wxzt8RtZ2Hy0nLPL4swlnHsaZc+nMSRruZf1wz6Gc+Y/icgwPn++7oymp/XgYabeXJHvqfeQ3bOwbmmJOD0x4AeHAEzkTCGBQsGnjVWhDAucA2bty44d/+7d82fOpTn9rw1Kc+1QBAC3YmkcBqEChPABCM4+7/Zz7zmQ1f/OIXJ30qFlvnRXxX76SID1UWAdr0EeAz7bMRCxYcgedGLHq5A9LPWBTujOhfPg9ZKFPOahsLqaXqJCzQcGS6fB5TTx5D/UjE4o6F7zgL3KoqnV+oM3eXfiNiMcwimSAZdxzRWhn1+ED0ExEBgC3RCRHOJ8Yi/eTo6Ii07GcR24UZi/v3RhdHXBcoj/bCvq2dmYTHVIkZdzjvXawsxi9IJgIA2yKuSW0djy7nGpYWxqdGzDXm2a4ItjdGBGCo1yURQY7VcMq5DheW9CNs0Lwb45s5RpsZe2WMD2s3d4D5LCM/Xx/YEfUb/8fmGH3LtfNjEcGAWbEdqShjkTYwR06PmGujXtfxA58f4Tgzvv8kYpzBfkd0TrQUbYuujdoYdeH6vRQxlxnHvxgRSBi1nsm63+qP//OkQhkrBAEIgh4fwaVtfa9LWurHdZTfATAAEAjaghH493//9w07duzYsGvXrpU7juWrAAuGweZKYN0JEAAod/5LAGCCleJDnw9IHIozIj78RwkIb04+PnD7LbQmWOWpK4o2PxrtWKOacb67IxZ/J0aXRiwAfzvCQennHLDgIs9aGc7jN0eMKYz/zfxQtd3lhTrvjliUrqfhfH4o+p6IO0/MFxag1G8tHVOclZ3R0RFzlTtVxTli4crdapzGB6N/irrWDc6UtxzhXNB/J0U4Am3tOUm4MWKs3hyN4hgTWNkb4YAzlk6LGAdrfY0hwAJj7LaozKH7ss08pH5L0dnRVdGk60fggacpMDhy3kmfoyp+al4Y1zhvsGf88tRFv+tas9I4bcwRnE2cTn5DopezyVyh/KVoe8SYn6UAAGOB8cd1iOv/d0ZXRr3amt1DjUAfzJm3ZZwzzrge8FUsAgBHRDwRhKPcdgw+nLS/EJ0aLUXbotdEvx6NY3z2MS+51t0bcb3AuO4RBGI9wvWLa9H1UZvPD65xzG/K5hrPk1Ztx12SzqaNsuCbzZZa61YEcPi54/jJT35y5bvHOCGDfgdgjX6UrFXdR0k0qG1typv19rdpo2nWh0AJvjEfJzzO+DDnXyWxSGIBwQf/qMYihAXEohqLch43XCu7qTrfUl6503VhhAOyHN0S8aNQBANwoEYxxgJOF2pjZaFYX2ThoHJ3lAUai1Kc1lEcQcpuu9hsU9dR01AHFod8FYCFJ21jkclXAVh4r5VRD1gy52D7n2snxqnBIcUItnRx2mvFrCx6WVSzgKbMpfrBFts4VmVdifM2ilEH2nlaxDj81ugD0VqOBfoYx4rXfRF9jYOBMdZxNM6KGOs46VdHozpglNk0yuWHE+lrjHPzlY55N64/OG6MIQIAJcDVpt2MG66PBK02RYwfrolNOyY7vqvaSfmMtS5GYKbt9ZFyGS+THBuU+YmoOKjMVa4HoxqfH/DCuPsPd4z5dmPE2CMYyLWP+d2lT0j7toi76zB7U7Q94nNzVHtJMvLZR/34jYzC9tFsEwzYHcGEdQljoQQIstnX6KPbq/TMPfISDJlrMwAw193brnH8BkD5CkD5d4A4HPwGAI8cT9j5aFepNUo1z21bI4SeZpUIlDm5CsXzAc9d2W0RC9xxjMUQWlQjgLKWRt/xa+0slF8ZcX74E9RhscMdFhZELGpviP40YgHU1nm6LGkv7ZCe83CX59qoGItFFmgYi0cWZrNuMCWw8q7o/4pwtnlU9O6oy4J4XA7wpC9Zu9XnLuMB7tjno4eq7a4vtLMEDxhXX9uxABwJnBHqeH/HvPXkn8wb6oLTUJyTMYrrnPW45MCBpC3Mn3uj+hxibr05wlkgEED6NuMAx4J8xcnK5n6jT5k3jC3SMbfZR9r3RbdE826099SqkVw3dnVsMNc7roGMG74G0IvZ5uw/PWKc8/g/86Wt0T/cFcdhbGv0HQH3vW0ztEjH/GZ+YFwHGKejGGOMADafH4wzxnWdx8N5z7X9hyPG+vlRm3GeZCsGJ66ZPPq/LTomggVPE3RhmOQrxjWJfuV6R/6rvrJ7/1+uOTdFx0YEK86ImL9tjCDf5RFlnxIZAGhDzTSzT6A4wuU7xvwWAMGAEhDo18JZd6B9AqBfz7p/vQnUnwCof/efMTvmvCsfnHzYs1h6Q8SH+yiLCBY3/KuuUT7M1xvxJM7f1rGexLlKGfuyweOtLHZfH22LuFvL4pnFC4tfnBcW0v9nhJPKHZjro7JozGZPY4FFGW2NBTHnrRvnL2WwiB80Njgfdd3SKKP5dk923BGxIF0vw/nm+6Y4D2dH50SwXY5o51rY/6idBHbFmLuMAYw+7uLUlDLKa2kL/UpfdjHqQF2YF+Mwof5lbpUyu9RjnLS0G+cBJ4A6/G10X6NAxkJxNGD06oivUw2zk5LgxGGJquOcm7nO3P3NaNA8alnk1CfjulH4MAaY913s9iRejnBoXxIRbKlz43pFH2yK+Pxr6xwm6X5jPHa57jNPR/lsrZ+zuf2FWh3q14FmumHvtybBsVX9tuf1gVq55OVawl32yyKYPi96R9TlOkxdWSP8XQR/zsm6gzHd1bju0nfwpP/2Ngpgvvxl9MqINc2zo6sj9g+zMscZgzx1NPfW/OCe+wbbwN4EcCiKU/HCF75wwyWXXLJ
hy5YtKz8EOMjGdEQGFb0mxwwArAlmTzICgWYAgCcCnvOc56zM0wnMOxYw3EX45eiqCIfy4og7Al0cQMphkbAIi9M0c2oM3izYEAuibdH3RmdGR0b0ISoO9u9m+z3Rz0WDFm8sqO6PhgUKkmTFWKAzjurGmCgLZD5ABi1+WUzzA4Uvb5TRfMtdqJ+OBtW9mWc13nP3i99cwElhgcmc4S7iNVEXp2DUutU/kNdiznU9Rz394MXDYAL1MbMWXOu12Zw3/N4DcwengCAPTkzTCAbR/wQAGL98b3hY0IN5RVmlTaWdZb7y/pHo1ojHsRlXOGWLYnDgLjF8CAB0ne/kIzCKk085Z0Q3R8WYs3zFgD64N9pZO9Zmk/HNmOhSr/836dteT9vUgTSTuA4w1p4bLUVwY7w1r+W0l/GPTq7SnpPXD0RdDM4/G/G1FtYY/FeAHRF90NaoL08OlAAM/w2hadSXcxHYOT2izrx+qJmwx3s+yzDYcg2YexvnAj33cBalgTgWdYeC7xw//elPX/kBwGGOxrDj885w0ds/7/272u0bFIBiHpZ/BUggjv8CUL6WM4GnAErT+OBfjggA8IHKB/PZER/SLMa06SeAw/7+Snym45yeFT0/YgFEXyKcFJwLHtnvtyDlbgl3HNvcMUmynsbCvZTPHZ9B42jQQp+24IT1W6eQt+lI9axQYycLyeJ4tUlfT8MCE4cC3j8eHRfxVQAWsl0Ws/Uyu2w/M4lL3enLYtSrvIc3TukoRtlHVBnpw3/rWMjDSV/6pJTTsYiV5F8TlX7HKSllcqy+3bXsYf3O8S0RjiO2HNGmXg4B8466HRXR1gsjxsUguyEHfy0qTn3hzZ1VgglLEe37aIST1XTIsmuuDY70O2MP7qPYtcnE5xlzgMfFma8YrDdF9O2eiDvSzJthY4K8xbi28aTH9vrOddjmulrmR/060KUqXLtOiuDE9R4xlpuGw31rxGfJMRGfKx+K+n2GNPOX97+ZDfpja8R8ImBG8HdY0KzkL33HZwJtJhDTa17SnwRqcfxpI08B3BjRd4Psweog+Qmiz731+2Cd+4bbQAlIQAKzRoCAQPmhzgnXnQ/z26PXRtuq19PyWhZkEz6dxY1BgAVKPyeIBe1dlX41r/ThT0VnRyxqzon+LKKvV8twbsrinfFzeMRag7o1jUUZP27GXfSmfUt2nB/1WuSRtl7eoCBDs1zSlkU/474fy2a+8p6FMnd/T41YzJ4X4UzgEK62waL0/7/WTsYdTc5/QsSCfdQFLP20VJVL3+ytttu+7E7CUyLquKVtph7paGfpU37wrG70O33GOVCXdSzOQ7FeDgzscHSOrRIxX86q5Wlucu4yll6VbZz2XuWWfMwLHJfmWLkp+z4cERwggMeTOt8U4WwWxySbc21wxBktxpgexe5Opl0R/XhuxJNDjGX6lrHJ3LijUl5m0o5OrctYZiz1urYOa9iZSVDGOY4/T070szLGOSdBA9h2/QxhXvAjgHz+0AfbokujK6I2dkES8XmC0Zd8Z7+flfpyDeE6zXWRPh9kJUBA3vp1YlCemT5WIM10I6z85AiUX/0vr5Mr2ZIkIIFxCfDEyaCnBsYtP/kfja6L+E4rEfpbIhagXZ2kCVTFIhoEWKSxgD0j4m7IsM9vFoW3RjgRt1VlkY8F0WoaAQCcZM5PHakzC7ZexiJ/e/SbPcQjqYOcH8ZqGZfcEWtrLO7KAo/zj7J4vjP5fj+inTgtr4hYUHdxRpO8k8Hw+No5cHSKweLe6g3j5Lho2PioZd+/SbDmxOrdI3nd1SvRgH0syosDzI+LjVIHFvkED1i807+wrl9/2C7OIWnqTmPeDrRn5GipE+1r2ubs4Ks0GO3AYSddP3GHvrSXO444R6PazcmIs1qCAxdX77uM7VHPPS356v08ytihHZRBgA7DUaZfMOYFfVvmyu5q/6y9cI0hOFquqf+Y7Tq3Nu1hTHFnnM8Drn/9xnd9P9ww5sjZ0Sj9wzWL/wrAnKE/WGfgnA8z5jhPHlBv6jtsXlLv4tAzJ0+Jhl2b6+0ZpW3D2jB1x4cBmboKW6HJE1jFXxuffGUtUQISWAsCOHHvinDOLoleEPFBzQfwQnw4rgXkjud4SdJzZ/DIiB+3wmkui5xBRdGX3N3ZFrFoJP9qGs4Zd/RxjllgsnC7OmLRNkmj/LLw/YYOBdP+4lQRYCgOXIciVvLcEH17dFmE0/yyiKcZVmt+EPgpfQfLO6JiLHj5sTrY0zZ+xAr2e/anGL6BM00QA6eJRTaOKAv2LrYjiRmTLNhxvAbVgWAD45E+oN7FtmaDOsCRPqYOdQeHujGmlyICOTBp68zhvJR176eyXTf2b4lOizjfndGN0X88KdWBb74vu+gb8hMIIl9Xh6yUel02cFaY3zD84Yi59L4xyixlT/srzIqTSd+XIN0o9b42mQhgUwafXTsixiJ9uzfiKxaj9lGyrqsxhgkE0jauQ7dUr10qxR18rlmM2Z3R9dFnBhRAf/BEyqUR841/o3hMxDWiizHP3xMRiDkrYk3BDwR+ZEghzAmCmlyjmOtXRv8+IA/1/Y7ovIhAA8EOrteDrofMN6w+Dqtd8/lSLoTz2TpbNTaBVb7bOHb9xi1g3O/wzzufcfmaf7IEGK/N3+yY7BkOKI0P2/8a8eGJg8OHNouP8mF5QAZ3rBoBHCsWfGVxw4KWxdswY0GD04TVt4flG+c4i1LqxiKRRffZEQvvssAfp+ySlx/XKu06NdtwGbaoJw0L6OJIsyActU4sfnH4WUzj7F5QlYVTO2mjTOZfKfvGbNcDKiys74lwPs+IWDCfG70vqjvXedvX6Cu+k4tRNovyrkGbu5KHOtDfR0SU95aIcVs3FvLU77nRX0e3R/QF7XtpxKIduyZq1oGyuC4tRZxjS0T+YUbZOBGMAey+RgbK+p6IwMRD0R9H/3c0bEwxzhnjtAmHg39zxlgf1X41GXl64pyIOvMEzx0R/Tvvxlykf/FNaDuvZY53afu9SUz/nhRtixjbzFH6FpZoFo0xxnWGaxhGG2nrsDFaJV95gSnjlXmDXR8xZvdV7/u9cJ3j2kI+5tGZ0VX9Eg/Y/0COcT7K4DOMcU6/UK9exnwlyFa/JvA7NZ/vlbi2j+sLaxVE358Y3R/1Y1U+Ezj+YK2cud0sF8K5baANk4AEJDAvBPhhQLTGxglZ1PN4Kt/hw6lg0THKwmyNqz5Xp9uV1iCMxReLMRbJwwzHhjs2GM4UjtZqG+PjryIcqUOiN0RnV9ttzk27WOQeNiAxi/iyCCQAgNM7zI5OAu4MsdhjXPP47CPDMg04fluO/WHE4pm6viRiUTtJI9jGop+FMiypL/+JoGksbglIcHwp4scJWfjiNAwzePD7H6RnXuNYfGhYph7Hycv32KkDdb0sYiHerAPnw8m9PCL9L0S08TXR1og2057fi74Q1Q0nsThwOAX8qjtjfJidlQT0Dete5kHToeYYYxRjnnGONhfb7UlHXTHqc361PeoLdeNJH8YUdd0SMX8GzYVRzzVt+ejrvVW7ae+obabf+BoA/H
D+6dfnRYwd+n0troE5zUTtqSnttKgEyAiUMD+6Xr/gUa6B5OVpCK7Tw4zxyFjHuDYzf9t8/lRZ9r9wjSBgd0VEf1MWgTOCAL2MazafdYwFrvf8hkDzmtArH9cwrs8Yc+iUaFB9qQdG/Zar7bl+MQAw193bvXH+BkB3ZuaQwFoRWOO7/81mseC4KeLxVB6vvDIqC99mWt9PngBONQsaFm04KzhsF0Us6Hp9lrNvKbokwnlkYbMzKs5TNlfNONdVEeOFRdvxEf8G6pXRcVGv+mb3ygKNhdoPRz8UlUUZY48y6waPu6MnIhaPjMlTo6azSR7OB7NLo60Ri2mcgDujrgvoZNlvLESvjW6MaCcOII7vJIwF74kRzPghR8qmrVdHt/Y4AY7jDRF1IR2LZgJ29H2/xTVc6BsczB+PeM9C/70RC+hRjPN/IKK/WLzjzDJO2S5Gf/KVBdpCvS+MuKvH3W76CUftnRHOGm2pG5zL0wmMl7Oil0fHRL3GFf1xevT6qDiU12X7wagYaWANizJPGFttjPozBjACFy+OqNc4Rv9eEdF2ysRBOnecAmckL+OCeY3RJ5vHqDd9zPykz38w4trA2GbsNMfUGKdZ9az0/7ERc4Qg/MkR4/y2iHFHG7sY11fGOnZXtDOC+zAjSPCXEXOTvqEeJw3L1Oc419w/iG6JuBbTR/3mzNYcYxzQZuYk16U2/Ucg6e8i5iefCTzdQyCgn51QHYDFP/VLNE/7Aa9JQAISkIAE2hLgw/uaiEUqi/o2H8ZtyzZdfwIsZPhhPBZeOD04NUdGfL+RH18jGEPf0B8sqFg0cef//OiIaDnisWYWfP2MhSHOFOfqYjhqLEjrtitvfiMqThr15k4vDiJ3nfZELCqpLwtKHFyCA9T5tIg644xRNgvPusOWtyuLVu4eU2ecP5wkFsvcIeLcsMA4P+P0OyOcYRbTLGKviih73PG7nDJYzFIPFte9nNDsPsDou4siFqrFyIvg8bURi9LSz6SB3Vujfgv23TkGc9htjWgvbb8+YoxwLhxozkEaePN93AsizvlAdEUEm1GNuuHM0z7KxUmg3z8U0e9lnN6dbYx01JF+KcZYog70U9MoH8eF8hir5CNwwHeUCSrsi3CKWPTTRo7zCDEc2cf4h1HdqANpOE5+WDXH25NzPPkd/f9jVX7GwanRjkEZWhz7raTZFp0RMTdeHd0ZMbbX2/AdXhO1mTvM8dsj+n2YMd/pQ+Yy17DjI/p6FKOfmd/MyTMj+nZPRF1GNcrgNwWWOhbAeGQOwqJpzDueTmCs1g3GiP2MbcbvUrUPJr8S0Z42fZBkK8a1kN8tYU6Q75aoTb+Ql3l3X8S54UnfMDbhSb91Mc69HDHGuQZt7pMZNjzhw/UB47dsel0TqsNPeqG+9D8BJerJnGRuso9jTeOpCIxjdzQPzuN7BpcmAQlIQAIzQOBpT3vaSi35GsC4v18xgeayUEba2hHAAdggdLgAACAASURBVMDpxYFjEbalEo7b3qg47iyeWVTh2JTFFndUr6ne56WnUSYLpS6LShZ/74pu61EiC0yOU7cLIupzaXRhtByxIOb40yOcnGMiFtkYY2tHxGPtvPZaPN+Y/SyEfzIi79kRi73lqLAoAQDODTfKwXnk8Vm4TcJoJ48cb4rKYnVYuaSl3k3WrMtY+NbXZ9T5hujt0c4BBbN4pR9wuMkDDxygE6Ll6P6IBTRlw4NF/OERxkKZO/9XRKQZx5aTmScyqAN9vRT9WLXNsUcirDjo9baynz47J8JxYuw0GdFvOA/kJ91S9LqIczFuyMc4oo30B+MLuycigFJf4DMm6IutX0mywunWarvtC04RKoGr78/2zVGz3m3LIx3tYGzTR7STeclTMQRXCOKsp8GW73G3sbuTiKdAGHvDjPHLXOL1sOjboquGZepznDIIeMKN+cSYuzOC66hGOZdHXLO6GOfk3L2uYYzNV0aXNApkXMIZsY3R74wrxv5NEeO8i+H4cz3gnNSJQFevOvUrk7n4FxGfE4xJgqqbo939MgzYT93p6yuj/z2CbdMY+ydF1Jd67oi6jH2uacz1Mi/52gJl7InqxvWHNMxXxin55t6aF925b7ANlIAEJDCrBA4++OANaB1+B2BWkc1bvVnUbo9wbl8YnRXh3B1Vqd5eFjM4SiyyeHIAB/LBeoIe22XB2eNQ310shnst3shAHW6NWDhyZ4+7zSzIWYhS76bRvl3R7RGP6rLY5X2/BTfp3xWxOOTRaxZxOHy9ysYBwAHkCYFrIsodx0FL9v3GovSqiLtI50f9eNTzsP46rL6j2qZOLI7pO7jdF8GO/mOhPazO5IUb+Vngw/zk6LiIBXXd4Mpil/Qfjm6KxnX+S/nUG8fvH6LnR/T7UqWShlfqwKKbccL4PDsi7aaIp0FwOGBLXxcjDw73W6KdEfOAtpEH1Q1eLPhpI0EaONYZ4lycGR0TwY56d3UAqNvvR4w/+v6MiDHOGBvVaCP9Qdt/LDo8uiD6m4g2rLdRnzbGGG/ra9Bm+pN5iuN3YkR+5m5Xo4/hxGPz9DHzgSeJxjWCU12N60M/BgfnGPVDTWNccW3bFzGWeKSdMcHYZ6x2Mc4DU64F2J0RZQ67nlTJV17oBxxqrkvMF8piru7m4AjGfCfASxnM+6ZtzY6laudteWUed6kv7Lj+UN/N0bboDyOuN/VyuHYsRYy/7dGkroEpanqt34Cc3hpbMwlIQAISkMDiEmBRiLPE4g2nbUv0jdGREYtIFnosYD4VsZjGmcGpYXHTy1hQElDAcRnVuMs3yJZz8D0RThiOKHX+uoi7SBsjFrPU4V+j5Yj6stijrcOMtr4/YmF6QoTj9ayoOB6UzeL/E1Fx7lgYDrI35yAsWbi3dT6Wk5Y7tgQYaBPnaC4kaeObokGsWZjSV+RlgcxitS2LJF0x8t9T5WNBywK3cKFdHKcuMGEckZbzTNJoB2W+O7olKnX4+mwXZ4c6fDLCgWCc8h4nje9snxtdGuFofCBqGmMDJwY2zAPG1TdEjCn4cn76rpTPGF2OmvOA9zgXr62OUY9mv2XXULsmKYpTxnkZOxjn5SsKrLeZj7SxrVEO/zf9Y1UGyseZGWS0hd+MgPFyVOoxKA/HYImzjIPLmGDO1I33lNvVAWYewLStwYc796dEOG0nRYyfprGv1OeubDf7jP5nvvPbD2U+co1oGulwCj9eHaC/msbTDkc2d3Z4Tx805xfXCoJGg4yxSV4Y0u/LUbOdzfzk4fOB8UzbGHPFaBtBOT4jYMO862KUDcM3RkdEzMHSt5THk0dwWo64dg2z0kf8PgyBAKw+R27Pe54kYu5wnjZlfqWUr/yl/JsimFFf6k+b2V+38/KGMUI6xsJC2EFz2MrXpU1cxJhsPxLxwaYNJnBJDnOB28zdxRe84AUbLr/88g2bNm0anCtHp+Ax5KF1HJRg3Pr7bwAH0fXYuASYj4899tiGpzzlKRs+/elPb7jkkks2fPGLX9zw+OOPM/dwprjG1T/gxz2l+WePAIu5wyIW5ixiMBZmLKSHLRar5Gv+Qj0Pj
3BSWNyxMKOuODzUfVSDBc4fPCib95RXyuY8i2hwYHzABfYsgItjsVZMqAN9Qh2Kg45DS5/XnVTGw3HRGRFPuRSnZVg9aRdtRJSBUS7zoDjm1W5fppgA4+S0iKc16FO+9vLzU1xfqzbbBLgWEXQ8Pbouemk0zmfQzNAoF8mZqbAVXVsC4zrIa1vb7mfTge/OzBwSkMBUEcCZw8npcmdxvRvAAqt5h3ESdSqObd2hnES5s14GXHC00XoZdSAQMywohaPP3T7u1HEnkzoPc/5pE2OKO6VIm10CjJNd0TURN/SeHW2O9sxuk6z5FBM4M3U7PmLc8fs6C+H80x9E2jQJSEACEpCABCQgAQlMCwHu2hMIaD46PS31sx6rR4BgJk8A7I1OiLat3qkseYEJ4AO/KuKpoe3RLYvEwgDAIvW2bZWABCQgAQlIQAISkMD0EuCJD76++/5oU8SPSB41vdW1ZjNKgK8Z8QQAd/35QdGF+qqQAYAZHbVWWwISkIAEJCABCUhAAnNIgK9y8GOA/BAcvwlwzhy20SatHwG+As8PJRJYekd05/pVZX3ObABgfbh7VglIQAISkIAEJCABCUjgQAJ8J5unAPif9zyizQ9CHntgMvdIYCQC5yYXd//5DxL80ORC3f2HmD8CONK4MZMEJCABCUhAAhKQgAQksEoEcMqujXDSuGHZ9d/ArVK1LHYOCDCm+MX/fRG/NbFwZgBg4brcBktAAhKQgAQkIAEJSGDqCfBfI/gxSE0CkyTAj4su9A+M+hWASQ4ny5KABCQgAQlIQAISkIAEJCABCUwpAQMAU9oxVksCEpCABCQgAQlIQAISkIAEJDBJAgYAJknTsiQgAQlIQAISkIAEJCABCUhAAlNKwADAlHaM1ZKABCQgAQlIQAISkIAEJCABCUySgAGASdK0LAlIQAISkIAEJCABCUhAAhKQwJQSMAAwpR1jtSQgAQlIQAISkIAEJCABCUhAApMkYABgkjQtSwISkIAEJCABCUhAAhKQgAQkMKUEDABMacdYLQlIQAISkIAEJCABCUhAAhKQwCQJGACYJE3LkoAEJLA4BPj8OGJxmmtLJSABCUhAAhKQwOwTeOrsN8EWSEACEpg/AgcddNCGpg4++OD9+7785S+vZ6OXcvKzoyei91Sv61mfRTj35jTy2BYNpU8+Hz0U7Y2+MCDPphxbitqsBSj3sejhaF/1OqDoDYfl4HHRoRH1uTuq14VznhgdXivkwWzfG3GutnZUEp5QS/xItu9sZD4676nLqDc9qD/1erRW7iHZpj84P1zui2Bet6W8od/anJc2wwe+9Bvn7MIhyTdQJ853fESb6QPOTbmw3R1RTxh1LTtZWtsxSUnfMr7oX/oaRvDZE1GHB6I2daBvYUzaXRFs2hgs4MD5ycuYqPdfvzIKQ9qAcT7qXB+7/fLSzi0R7NvY41W59A3ngVHTnp4dpzV3jvCecXVXlW9jXhkn9A/nvD+ijXVjbMOANi1HpKG+mgQkMAcE2nzoz0EzbYIEJCABCUyAAAvxs6IXVK835PV9UZuF/AROv9BFbEvrXxsNcibpB4SzUpytj2b7pggHoGmnZ8erojZPclBuCQDgrPx9RLnLzUKr9zgYb4hwvHEu3hgROCiGo0V7Tq7tw0n7xQhno62dn4SvjuBCHXHUf6SR+dS8/4kIZ2oU251MPxvhgBY7Mhsvi86OcOCo963/8/DKFnPlhyIcrkFW+q3wpf1/G22PcJSHGWs5OMPiu6ItEXP10Agu9fFAIOYvoh1RG4c4yVobDuN50XMinG/elwAAdSDwwFigjz4SMX4YS4PsB3IQxrB5W3TtoMS1YwQ/GBf0PXkZ523ai1PMuDyjKuu2vL4zImgxzBjTF0VcH4dZ6fPSN8vJ8NcRfc6+YszNX4gGzfta8r6bOP8/Wh1lHtBPjF+uE78fXdnICfOXRoyh34quidrwaxTjWwlIYBoJGACYxl6xThKQgASmiwCLwDOjF0fbomMjPj/GXZROVyunuzY4UzjsXZjj+JwbfXvEIr7pbOEk4iDx2sW4E0hZpVwcuqYxZk6MTolwYppOMOOHu7u0qRhOG84pDnAbg8UrouKs4VT1Mpx17qJSp1GMejWDBzh7ONrUn8AGjm7TcCY5L2m7GHxx0L41+o1oEA/KxlnDaaUutBWjjM9Xr9SdOYtgtS3Cke41JrJ7JKNcAi/UhXZj9Ad1wPGnnuwvTLZlm2DF70YEfvr1XWHMWO4yTjlfGV841M3+y64DjPFEIAUnfnN1lHlHEG1n1K+OpSDyf1NUH9Pl2LBX2ndORHt/M+I9Rr27zvsq65NeSnnsZO59Y1XuA3n9qx4Zvj77GLuM/T+t8vRI5i4JSGAWCXAR0CQgAQlIQAK9COC04SB+f3RWdHzU1ZnpVa77xiNwR7LfE+HkNY0+wxnFkaG/cMJxnL4Y/XJUv7uYt/vt1mzdEn2uvrPaxrH5qohycOi5a4+D9PKI8rj7zZ3EcQ3n8NkRzukwZ4tz0caTOp703qTH4ezHoVdxe7NznPbRFvh+uFfh2Qff/yXC2cTpwglEl0efjt4T4UQ3jbl4fvTTUeGwJ9vbo7+LHowYIwQ+cP6/N2I+kxbWnO+nIpzAcWxbVc7WvFInnja5LuIphn0RdWA/Y+Z7ojMj2sd4og5vjW6L2vR5kq2aMW/gQ53ob+rDNmNyR9SFE/1FHhj0Mvr8q6OlCB6ch3kFk53R9REGy3dHpG8aTKkv85y63hQtNxNV7/+5z353S0ACC0jAAMACdrpNloAEJDCEAItN7p7xCOg51TZ3grTpILAj1eDO6aM9qsPnOo7B0RF3x18ZHRm9KLo52hH1Mh4R/p2on5PDmMCRxIkjIPS6CIeJ8cHj3Dh8oxoO4ucjyseZwTnFkR1m5yUBd0i5u7lxWOLq+O155a56v3b2Kob64UyPajhnBB1+c0AB9Ftx1LmbT9voNx5j3x4RuKgb/YHDyFcTCPLAj/7lcXX6Ese7BDlIy/yljxgHr4kYH5dEOKpvimjjKEZ/kX9bRB/cHREQgvPeqF4H+oo6nBXxtACBCMYP4xiH+75oPY1x99yIvqC/YHpBhIP+x1GXMUNe7qy/J+pn8KJfYEgQB2eeQMCboxsixg1c+ApALzsiO0lbAgAfzDb5elnph17H3CcBCSwYAQMAC9bhNlcCEpDAEAIsQC+KeNyfBTpOHg6ENj0EPpuq3B/1uitcaokzhdN6XHRGdGzEHeAdJUHjFSea8rjj2M9wRjgvaZaiCyPu5PIoN3csR72DS7n3RDiG1JNxtycaZIxJnFmcqFuirYMS147hbNIGHOS1NBywQWypC/1Fu3EecUbptxMiHPzdUd2JI1iA08hx9t8YEQyg3+nLutEvnBunlvIZP+QlwHBZxJMJ5O9qrCEJUFBP+oHAA7/1cFvUqw6wJ5ABf/qcwMHJ0fkRvylBwGDQmM7hVTPqz1zBCYfVrdH/iLZF8GdMwo++aWMwb9PnBBWWqwIJwB0VnRZRF/qSwEy/uUD5MC1Wxk9tl5sSkIAEDiTAB6gmAQlIQAIS
ODwILo1YhPJYMHe9uMPk58Rsjg0cEJxG7lxih0U4FTg64xjlLkd8Vx/jru4xEeWPajhc3DHGmVmKTomGjTscMtpDOu58zovBF0eTgAgOHk42d3jr/Uabt0VnR9iuiLvE5Gk63lWS/S84ie+Jro0on3nPXeRhvPcXUNvAUT0rou/pu5+JcJyH1QEnn/P/QYQDTB142oj+XC/jWsePFxJYIUDxDxEBDdgyxjlGUGY1DCd/e7Qnov8PiQg6aBKQgARWhcAoF/xVqYiFSkACEpDAYAJPPPHEBv7934T/BSCLTRbxOP6/Um0fnVccD222CeCI4cBgfN7jyKBxDccRR7IYzuk4gQUcoE9EOFw4YN8SDXO2zq3SUo+bonky+g0nGS4Yvw9QX6+x/UMRcxfH+7oIdjiPbYyAy9sj7sJjp0cEXbra9yXDsRH1uSbizn+p87CySr3Jg50aEdShTethBLFKQGU52/C8uxL12RptiVZr3VyeFih9yDzQJCABCawKARd4q4J1tgp9ylOesuFLX/rSSqVxMD7xiU9s+PM///MNhx9++KQdjdkCY20lsI4EDj74K+tM5uQXv/jFlTlaNKFqcQLuuP1kdH7EHbBxnLgJVctiJkyg/jmPc9HWQRtUDYIIS7UEPBaNQzeO7U1m7h6fGeFocQd0T58CGbs4n9Tj6mjcc/c5zbrtxgkmCFec4f+e7Xq/0e7irOLM8yvtXfuVR/Fxvi+oznNeXu+I2hp37U+OypMff5RtghZdbGcS82OFZ0U4vPzY3o0Rd+DX0uBJWxh3tAHHnzowX3gSADY8mv/d0e0RQadJG+0nCFHm6wOTPoHlSUACEigEDAA4Fg4g8PGPf3zDzp07N2zcuHHDQQcdZBDgAELukMDqE2gGAB5//PGVuUhAYAKGs8//Rb88Gsfxx1n77WgilZpAu9a6iH05Id+lnlbDsfmOqnLcVcZxmYSzvJRy+B/iGHeR/yVq+93oKtsBL5SDM0j9CExxN/j6A1J9ZQd3i5ciAgHz9Ph/ae452cAhZY1Gv3E3mjvExWBTHG8c1jtrx9puEjDgR+rOjQg08Ih7Fzs+iQlS0AfMA5x56trFqMN9Ec425dEurkdrHQDgnM+P4L0c/U1UAiqwpW0EAOgXAh2rEQB4ZcqFJ0ZfE5zRJCABCawKAQMAq4J1tgqtPwFAzXEyHnvssRVpEpDA3BFgIcv3t1lwj3vH/9iUsTR3hNo3iLuo0xoA4POdPr6sag53FPmhtX6GI0ceXnsZ+xkvPArNd71x1jCcU+7ajms4XLsjHC7OUb4G0MsZxGnlDjTHbomKM5zNoYazR4CB1zZGvXg6YdzACfz6saUeHOMuMG3jO/knsDN2XYQDWr/DX46xbzmqBwdWMrU07nRTBudmrPDaNpi3OWkL913ZHjUARB8yNjk/ZcJgLY0243jj3GOMwVurbV4Yj8zzM6JTohOr922ZD+tzxjHOPz+IWMbkFdl+KNIkIAEJrAoBAwCrgnXmCuUDn0XAY1/zNV+z4bOf/ezK3f9yx3HmWmOFJTAHBOpPAPT5zj9ztu1ivU6ExTZ3u1hwsvBkATrqZ8Fy8i7qnSrY4xiuh/Gd8H6OUnFoLkwanvIg4ENdudN67YDKnp9jOGG9HBvGB87JsRHlYZSJY8RvRxAEmIThDDKetkY4uThbzQDAIdn3vIj2vz/qGqm+NHlQW+P8/Mu6G9pm6JEOfvQHgYdeRmDlyGipeiUN8xu+b42aDL6OBFWacR4VZ/yWawhPi+DQP1yVPeyF6wZ9gXFHvB6gGJa3fpzzleAKZcJiLa0EXTg3LLn7X2dAYONvIwIES9ELo5uj5WiQMV9+NOKrKr0MdgQemFNlLjOWCThMa1CxVzvcJwEJzCCBURd9M9hUq9yPQJyL5TgbV+f4EZ/73OdWknn3vx8t90tgbQjUH/cvwQDOXPtazr88/elPf+TRR0e6MbkvRXGX8b0RvwFwXsTiv+vim4UwztGoi/9k1UYg8L8lD8GbZgCIz3T6sX5HnL7B+f+NqOlI1k+NI4IGGWXhECHuHuOcTuLufzknjiTfuX4kwlkmIEH59XaelvfHVBn+JK9dAwCU1SV4RvlNztXpO71sSWo0yKgXExq+t0S/GBFcaZ4fZx1jP2lHtWbe4tC3KY9rRVlDEjRq1rFNGaSBb7l+cP61Xpfi+OPUY3sirmlNIyi1K1qKzo42R6Qd1GbaclKlvPQ12s94py8I0PGvHNsGYfoW6gEJSEACgwis9YV2UF08tk4E8gNjt+fUSJOABKaEQP27/v2+9z+i819ayKIbJ+610dboDdHpEQtiPxsKpel8pY9QP6NvccpwJndGb4s+1C9xtR+nAxVn7OBsI4IJ3M3E2eEOKU4KjjeOUtOBzK6xjHPsjrgLui361ghn//5aqdz9pz44ZHdFg5ywWrb9m/C4J2pb94eSloDZuAZbAhzFYMs8qwdsaNP1Ef+6EQb96lj6iLIoZ1RrzvMuwRTqUNgTDBi1HtSh5K2XOWqbuuTj3ASZTo1oO/y5Jjbbwv57ozMi5h1PUDGGGBv9jLaUOVXSUC6sKIMgDvzo5z+LmJ8E6up9m7eaBCQggckTaF78J38GS5SABCQggWkmgKN4Y3RHdGH0iujECCeruRCe5nYsUt14dBtnspezwD4cExzdj0bbq7TD+PAUGE8J4ORj9D2OyraIIBGOEk4LTillM25Ww5ZTKE4RQSnGIU8ClAAAzjJBKl7fHfVzkHOor8HjF6JJOPV9T9I4gKN3ZXXecgi+BDfOjX4gop3w/WxEewfx/UyOY6zh6KNRja8dlDmOA9zlcSLuWpc6cq0YdT15aPKWJxoosxmEqI/xSV+POO+LKwa0HYed8dXLaCv1I8/50e9FgwIAzE8Cb++vFYbzf2zEOS+INkW0f0/EnO41n7Nbk4AEJDBZAqNesCdbC0uTgAQkIIH1JsBi9ooIB+mS6EURTh8OxqQX3uvd1lk//zvTgHdEOCR1w9FsOlBt24oDhFNcd4xxRLnTeUvEd/1PiX4s2hz9VLQrmrThOP1jhDPG+EM3RzhHp0Wcmzb+eTTISc7hqTH6Bb44eXWD712VePT75Aiuz4xwHjlO3qbdV+1gDQcPXkdxHgk6lLm9nO0uY4e2lIABdTgkGsWOSqYSxKDMZlCnXicc6LZGfcoalzJ6cSQAcl5VIEGM11Uadg4CU8yF5ajfGOR8zM9mn5Pntoj/nMFvdJwQvT0iqPW+qDmns0uTgAQkMFkCLuomy9PSJDAtBJjbLJaKDPZNS89Mfz24G/XfIn7A6l0Rd2O73Bmc/hbOfg1x9nA8muriwHWhcHcS85sRjA0cq3MifjsC523ShuO0MyLwgFPE1wCOjrimPTfinDyOTZpRnN5kmyqjz26MePoCvtxhviy6NOp3d5/+wFGGyTHRsdEo9t3JxGcEzO/oWADBH4I15CUAsCnq+jlD/bdEtAEjsEHgp244xMV5Zzy0Ne6sl4AB169mAIC6bo2oO+OItiwPEcEx+ot6vzTqUp8k32/03XuiD0S0l37mP2swr7oEOZJ
ck4AEJNCdQNeLdfczmEMCElgPAiySz4pYBGG7IxZ4zcVVddgXCTyJAAtinAycrA9HLHa3RSzWR73Tl6zajBLAebol4k7lWyIcn/Ojj0fvjvrdBc2hkQznkvF3ZsSd1qWIc7DNuW+I5ulOKW27Pvrm6DURDuGrI+Yf+5t8cUS5i8w1nrQXRATtmk5udvU1nG4cYNaBzHe+h97F9iYxgZjTIz5neGKIz5he/cI1YynC8SUf58P4nPr26pV9fxfhiNftU9UbnO5vaBwb9JagUfn8eyDbOO51o0589QLjc/H9EU+eDDLKpF8IuMB+KaK+XbiX8uH029Fx0bkRLPj1//siuI5SZrJpEpCABIYTMAAwnJEpJDCLBFhU8Avd5c7K1dneExkAmMXeXL8643jcHN0VbYv47mq5a+bnx/r1y3qcmbFwVfQ90YUR15aXRTwhcuuEK4RThTP2UMS1DOHocrcWJ/Ij0bw9lYJT/4fRidHZEcG2H4pwBgkE1B1CtvkPHsxFAiI439dFOI9tbGMSvTxainCsyUeAp4tRB76GcU50fMSY+KOIcprOK2OFrzbQd38V3R7R3m3RGRF1IOhDAKHZr9SN8rjenBTxhATlDDPGzBFVIspm/NZtKW8IMFH2csRXLviMHGQEFAhYbIoKd55UaVOfXuVSL75aQ11heHLEj7HydE2TQ6/87pOABCQwEgEuupoEJCABCUhgEAHuVl0bsYjnu8oElFjANxf6g8rw2OwT4E7qr0U4LqwfcFZ/MOLu5SSNcYXji/OH4/9tEU7u0dHd0e6IO8bzZrTtg1FxRAkEnB/hbNYNPjdGBF6KY4zjuPTkZD3fkZ47zq+KcGjhSJ/2unPfs4DaTs6PcFY3RziufKe9boyTIyPa8pqIwPSboldG/OAogQ6cc64vxdnP5n4jwETdKAdHmXKGGQ76syPOi/FkQd2hpqyXRLSfc3OOYc4/5VAGAQwCUxhlHFFtj/qyPRlpe+F/SbbPGrUw80lAAhJoQ8AAQBtKppGABCQgAQjg9L8/4lHVn4v4lXmDAIszNnAWcZZ+N8JxwjHFIeMuMI7lJI273uVR6G3Z5jwEA3CYigM2yfNNQ1k8pn5D1UacTe5246ifFDX5Pph9vxLhuNIPF0c44KdGG6NeRgDl0oj5i+PNGhDnk4DeKMYTZTzGfk/E2CBYwXfZcWDrdSiBo+uznzZdFnH92BZRh1sinh4gXdO45hDswGgnwYPzIsppGmUtRT8c8WQBdViObovqd+kPyXu+1oTheHf5+gPjb2+Vl4DEmdE4a2nmEU8B8JQVDGkjgVb6SpOABCSwKgSaHyirchILlYAEJCCBuSLAHdh3Rdz9MgAwV107tDE4TB+KnhPh+B8bfX+EA4MjOCnDweVrALxyVxkni3MTdCp3S0c5Fw4yv77+bx0y45jfHeGcr7bhXP5BxOPg1JW2EwS4L6o7yDiLN0UEAXCKN0XcVccp/UhE8IT0xalcyvZ3RgRSuFvP+g/Hmn+JiCM/qt2ejNQBx5+6XhRRPnX7WER7cHIJHH1NRDDjyKg48NTxTyLGTr9rCb89gUO/FJ0W8TsUOOL/HDE+yIfjfEzEI/pbI3jg9L8/ggUcip2eDepKvvujW2vHhm3uTgLGOo/s0wa+psF8oI2j2q5kfGdEnY6KaOPl0c+PWuCE8z035X119O9DyoUn4/SaIek8LAEJrDMBAwDr3AGeXgISkMCMEmCxx+JbWywC9PtyxFMAJ0Y4XWdGv5ll8QAAIABJREFUF0Z7onGc82Tfb5wHpxvnCOcPw4lcjurOXHWo9QtONU5oF8ORvDJaiwAA9cKp/uNoc8Sd4Auiv4y4U193NB/N+/dV+/hxOtqGg48DiWOLYw8rHFX6CedyY0Qf4aThdMJ4HKOfrosIkvCfQ86KcLAJYDAeuEZQZ+660xbqgJEPYz8OJml3RL3Gzx3ZzxMD3BknwHFKhLNM8KAEL2gjgQUCAQSLHoquin6v2s7LfuNrK6ShX2+OulzH4MkTA+dGnJOxTzCAoMA4BsPnRS+P6CP6c3vE0wvrbVtTgTIHB9WFPmVcGQAYRMljEpgCAgYApqATrIIEJCABCUhghggUx4mF/o9FOF58Rx/HlbvKk7J7U9DOCIcSh4072zh94xjrnq5rHxybrnnGqSN8PxB9V4SjeUT0xghncHdUnOdsrjjABCcIlDw/Ij1OMsGZpu3LDsrgP3vgXC5H9bKa6du+x8G/PiLoQACAHwslGEE9UN1w8KnDLVEJbpyf7S0RzvjvR3dGBBSKUUd44KjztAnn2FzT/oTZoHzyE0DBqSawUG8j5zynygBnnj7oatSTcinr0Ohl0bgBAII5POlAQAFmmyL+9Sa/k8Cx9TQCEmiYwfmQYYk8LgEJrD+Bg9a/ChOvwetSIt9v44PoR6J7Jn4GC5TA9BPYmiryGCl3fTDuHDEvWExrEpDA7BGoO3U4xjh8j4/ZjGOT//iIRTvlcX3AmWtjOMQ4YSdFOOfko17LVWacVu4CHx7hwODw4XAVw6HAsSd4UJw27trWjXJxZKkn2zhZOF7NdnMnFscXo4wd1XZ5WcoG9RzVied8nLfu5OH4nRDhqNF2nE4c7LrBFmHw7bIeob3wo3zqTR1ujnD4exnpYYkjvRTB7H+N2A//T0bLlXbntW0/J2knOyypGatLEXX52oi+pu8/HS1HnB+epKXfXhXRVpz+10bXRL3qR1sYc6WNX59txhd8KP//i5YjWHOOXk8T0G9nR5TFOXZE9XGZt63szKSCN+XsjW6LqAftoI6UfV9EPdoaZfHZzdzB6LcdUT0YUh1aeYEr82MpeiIiAEddhhn5GJdwpJ7waq4NGNscp01d7P4kph6aBCQwxQQMAExx51g1CYxBgEXjeRGLHYwP91uifovHKpkvEpCABCQw4wRw8Lj284pTSfAAZxJnD0dxLYzz4uATXCp14PzUo16HEjAgEMB/e+C/BNzRSJO3BxjlEvjhFaNMHHlEezUJSEACEuhDoGtkr08x7paABKaMwL7U58qIhRfGgqjXHZXqsC8SkIAEJDAnBLhjzJMQ62k45G0CzuXpD+6UEwx4IGoTpODzzM+09exhzy0BCcwsAQMAM9t1VlwCAwmUOz4DE3lQAhKQgAQksM4EcPgJWKx30GKdMXh6CUhAAmtDoNwdXJuzeRYJSEACEpCABCQgAQlIQAISkIAE1oWAAYB1we5JJSABCUhAAhKQgAQkIAEJSEACa0vAAMDa8vZsEpCABCQgAQlIQAISkIAEJCCBdSFgAGBdsHtSCUhAAhKQgAQkIAEJSEACEpDA2hIwALC2vD2bBCQgAQlIQAISkIAEJCABCUhgXQgYAFgX7J5UAhKQgAQkIAEJSEACEpCABCSwtgQMAKwtb88mAQlIQAISkIAEJCABCUhAAhJYFwIGANYFuyeVgAQkIAEJSEACEpCABCQgAQmsLYGnru3pPJsEJLBGBDbmPIdGZY5/Idufjx5fo/N7GglIQAISkIAEJCABCUhgyghMcwBgU1gdOQKvb0genB+fbhgBnlnmhs
Axackl0TOqFv1jXm+MHpybFtoQCUhAAhKQgAQkIAEJSKATgWkOAFyQlrww6lpHAgeHRYdEBgE6DQcTzxGBzWnLGyICAdjV0R2RAYAKiC8SkIAEJCABCUhAAhJYNAJdneu15POsnOzEiLv5XQzHvzwBYACgCznTSkACEpCABCQgAQlIQAISkMDcEpjmAMCHQ/2TUVcn/jnJc270WPTE3PacDZOABCQgAQlIQAISkIAEJCABCXQgMM0BgFvTDjSKnZlMBgBGIWceCUhAAhKQgAQkIAEJSEACEphLAl3vrs8lBBslAQlIQAISkIAEJCABCUhAAhKYdwIGAOa9h22fBCQgAQlIQAISkIAEJCABCUggBAwAOAwkIAEJSEACEpCABCQgAQlIQAILQGCafwNgAfDbRAmsGoF7U/KrI/4rBra3UvXWFwlIQAISkIAEJCABCUhg0QgYAFi0Hre9i0LgwTT0xkZj/a8Yi9L7tlMCEpCABCQgAQlIQAI9CBgA6AHFXRKYEwI6/HPSkTZDAhKQgAQkIAEJSEACkyDgbwBMgqJlSEACEpCABCQgAQlIQAISkIAEppyAAYAp7yCrJwEJSEACEpCABCQgAQlIQAISmAQBAwCToGgZEpCABCQgAQlIQAISkIAEJCCBKSdgAGDKO8jqSUACEpCABCQgAQlIQAISkIAEJkHAAMAkKFqGBCQgAQlIQAISkIAEJCABCUhgygkYAJjyDrJ6EpCABCQgAQlIQAISkIAEJCCBSRAwADAJipYhAQlIQAISkIAEJCABCUhAAhKYcgIGAKa8g6yeBCQgAQlIQAISkIAEJCABCUhgEgSeOolCLEMCEpg6AoekRkdHZY4/mu2HosemrqZWSAISkIAEJCABCUhAAhJYEwIGANYEsyeRwJoTWMoZ3xgdUZ35o3m9Ktq35jXxhBKQgAQkIAEJSEACEpDAVBAwADAV3WAlJDBxAkelxAuiY6qS+brP9ZEBgImjtkAJSEACEpCABCQgAQnMBgF/A2A2+slaSkACEpCABCQgAQlIQAISkIAExiJgAGAsfGaWgAQkIAEJSEACEpCABCQgAQnMBgEDALPRT9ZSAhKQgAQkIAEJSEACEpCABCQwFgEDAGPhM7MEJCABCUhAAhKQgAQkIAEJSGA2CBgAmI1+spYSkIAEJCABCUhAAhKQgAQkIIGxCPhfAMbCZ2YJTC2Bx1Kz+i/+P5z3j09tba2YBCQgAQlIQAISkIAEJLDqBAwArDpiTyCBdSFwb876+mhjdfYH8+q/AFyXrvCkEpCABCQgAQlIQAISmA4CBgCmox+shQQmTeCRFHjbpAu1PAlIQAISkIAEJCABCUhgdgn4GwCz23fWXAISkIAEJCABCUhAAhKQgAQk0JqAAYDWqEwoAQlIQAISkIAEJCABCUhAAhKYXQIGAGa376y5BCQgAQlIQAISkIAEJCABCUigNQEDAK1RmVACEpCABCQgAQlIQAISkIAEJDC7BAwAzG7fWXMJSEACEpCABCQgAQlIQAISkEBrAgYAWqMyoQQkIAEJSEACEpCABCQgAQlIYHYJGACY3b6z5hKQgAQkIAEJSEACEpCABCQggdYEDAC0RmVCCUhAAhKQgAQkIAEJSEACEpDA7BIwADC7fWfNJSABCUhAAhKQgAQkIAEJSEACrQk8tXVKE0pAArNE4NBUdikqc/zhbO+LvjBLjbCuEpCABCQgAQlIQAISkMDkCBgAmBxLS5LANBHYksq8JTqyqtSOvP5WtGeaKmldJCABCUhAAhKQgAQkIIG1I2AAYO1YeyYJrCWBw3KyU6JjqpPen9dD1rICnksCEpCABCQgAQlIQAISmC4C/gbAdPWHtZGABCQgAQlIQAISkIAEJCABCawKAQMAq4LVQiUgAQlIQAISkIAEJCABCUhAAtNFwADAdPWHtZGABCQgAQlIQAISkIAEJCABCawKAQMAq4LVQiUgAQlIQAISkIAEJCABCUhAAtNFwADAdPWHtZGABCQgAQlIQAISkIAEJCABCawKAf8LwKpgtVAJrDuBR1KD26PybwDvy/YX1r1WVkACEpCABCQgAQlIQAISWDcCBgDWDb0nlsCqEtiV0n862lid5eG8PrCqZ7RwCUhAAhKQgAQkIAEJSGCqCRgAmOrusXISGJnAo8l578i5zSgBCUhAAhKQgAQkIAEJzB0BfwNg7rrUBklAAhKQgAQkIAEJSEACEpCABA4kYADgQCbukYAEJCABCUhAAhKQgAQkIAEJzB0BAwBz16U2SAISkIAEJCABCUhAAhKQgAQkcCABAwAHMnGPBCQgAQlIQAISkIAEJCABCUhg7ggYAJi7LrVBEpCABCQgAQlIQAISkIAEJCCBAwkYADiQiXskIAEJSEACEpCABCQgAQlIQAJzR8AAwNx1qQ2SgAQkIAEJSEACEpCABCQgAQkcSMAAwIFM3CMBCUhAAhKQgAQkIAEJSEACEpg7AgYA5q5LbZAEJCABCUhAAhKQgAQkIAEJSOBAAk89cJd7JCCBOSBweNpwfLSxasuDeV2OPj8HbbMJEpCABCQgAQlIQAISkMAIBAwAjADNLBKYAQInpI7vjI6q6npdXn8h2j0DdbeKEpCABCQgAQlIQAISkMAqEDAAsApQLVICU0CAuX1EVAIAh2Xb+T4FHWMVJCABCUhAAhKQgAQksF4E/A2A9SLveSUgAQlIQAISkIAEJCABCUhAAmtIwADAGsL2VBKQgAQkIAEJSEACEpCABCQggfUiYABgvch7XglIQAISkIAEJCABCUhAAhKQwBoSMACwhrA9lQQkIAEJSEACEpCABCQgAQlIYL0IGABYL/KeVwISkIAEJCABCUhAAhKQgAQksIYE/FXwNYTtqSSwhgQeyLmuifhPANjfRo9U275IQAISkIAEJCABCUhAAgtIwADAAna6TV4IAstp5VujMsc/n+2HFqLlNlICEpCABCQgAQlIQAIS6EnAAEBPLO6UwMwT+EJasGfmW2EDJCABCUhAAhKQgAQkIIGJEfA3ACaG0oIkIAEJSEACEpCABCQgAQlIQALTS8AAwPT2jTWTgAQkIAEJSEACEpCABCQgAQlMjIABgImhtCAJSEACEpCABCQgAQlIQAISkMD0EjAAML19Y80kIAEJSEACEpCABCQgAQlIQAITI2AAYGIoLUgCEpCABCQgAQlIQAISkIAEJDC9BKb5vwAQnBglQDHNbZrekWDNJCABCUhAAhKQgAQkIAEJSGCuCUyzs3xCyC9FXYMA35I8h0TT3La5HlQ2TgISkIAEJCABCUhAAhKQgAQk0IXA25P4P6Ivj6iPJ9/JXU5oWglIQAISkIAEJCABCUhAAhKQwLwSmOa75J8M9LujrnU8MnmOip6Y106zXRJoQaD5FRrmg3OiBTiTSEACEpCABCQgAQlIYF4JHDTFDTs8dTs06voVgJcnz+uj+6NXR/dMcRutmgRWiwCBsNOip1cnYD4wFx5drRNargQkIAEJSEACEpCABCQggbUm8LqckKcHPhqduNYn93wSmBICW1OPT0XlKzQfz
PZxU1I3qyEBCUhAAhKQgAQkIAEJrAOBrnfX16GKnlICEpCABCQgAQlIQAISkIAEJCCBcQkYABiXoPklIAEJSEACEpCABCQgAQlIQAIzQMAAwAx0klWUgAQkIAEJSEACEpCABCQgAQmMS8AAwLgEzS8BCUhAAhKQgAQkIAEJSEACEpgBAgYAZqCTrKIEJCABCUhAAhKQgAQkIAEJSGBcAgYAxiVofglIQAISkIAEJCABCUhAAhKQwAwQeOoM1NEqSkAC3QnsSZa3R/+pyvrxvD7UvRhzSEACEpCABCQgAQlIQALzQsAAwLz0pO2QwJMJ3J+3747KUz6PZftRIUlAAhKQgAQkIAEJSEACi0vAAMDi9r0tn28Cj6d53vGf7z62dRKQgAQkIAEJSEACEuhEwN8A6ITLxBKQgAQkIAEJSEACEpCABCQggdkkYABgNvvNWktAAhKQgAQkIAEJSEACEpCABDoR8CsAnXCZWAISkIAEJLAQBE5IK0+KWCc8Ed0YPbgQLbeREpCABCQggTkmYABgjjvXpklAAhKQgARGIMDTgb8UbY1YJ1wbfWCEcswiAQlIQAISkMCUEfArAFPWIVZHAhKQgAQksM4Ezs75T40Oi/ZFr4/4YVFNAhKQgAQkIIEZJ2AAYMY70OpLQAJzQ4A7rYdGXpfnpktnsiGMw1dHR0T8J5GXVa8z2RgrLQEJSEACEpDAkwm40HRESEACElhfAjhch0dnRe+Mjlvf6nj2BSfAd/83R3zf/83RnQvOw+ZLQAISkIAE5oqAvwEwV91pYyQggRkiQAAWxx+H61XRhRH73jpDbbCqa0dgY051SO10/DDf5yNe2xrjizLKZz95vxDVH++/P+955J/z7azSNh//p5ynR7xy7LHqNS9DjTw86YKRlzbUjeOcG/WqH2lLG0hb7JHadpvNJgvyPBq14VnywqDUhXylvtQFJqtpfD3jmIinNB6oTkRdeHJjkFFH6gZ3+l5bewKM/zIH6L8uY4U+5nMDY5w15091aOFe4MmcwPjaUpt5vBqQSv9QH65hWJlzXF8ejrr0d1XE2C/luku9mPeMnXJdZzxxLeM9gd/1Yte2kUcm4VHR3gie2ggEDACMAM0sEpgBAsxtLujFuLBz0Z/2C/sMoB27iuWDmDv9L/7/27sfoGvrus7jO42z4zqO6zosy7LMM3csyxqLRCwREdEjIRIRmSkZkZFamlmW2f/dmsZpnLbZMctyyzQ1AjUzUyIkRERCZEkJkQgJbxHJjEbLP9mqtZ8XnMsuD+ec+9z3c98P9/Ocz3fm85zrXNfv7/v3vX7n9/1d5z5PdEG0Z1JqP8z2Ge9BW8AJ6dlpo95ZTF4a3bWJHh+WtKdHAkfG366K7pi89yIgsSnlNwDUd310zei6Q+WcF1ngSn9tdNtUmnlv5eXz5qI7ozdMJbRAPSU6LtLHq6Nbp9KcnPfHR8MC2/z2ykhbljXtOCPyOthv5GDRPejetVhei7TvK6OjJue0Qd7boz+J8FiP9GG7Tb8vjH42ekH0S5MKtOmnI+2cZUPgb+H83uiWyNjvRBtn1d9z9xM4My9PjgRhL4qWvXfkPjb64egz0e9E7t/av/pXZwUCptY93xbt740Rgb959dTo8ZH584jIvaot5mlz6R9EN0U2WnfC3Ps2Ac1T43kdlydF1hzmp4uioQ3m8sdFgn/zx2bm0STf7+b++dXof0f/K3owNlT2e6db4cYEnpUkH4reGZkoayWwigR88Dw7+rGJzs3rRk+GVpHT/u6z4MYi/XnRe6LPR/880sdyLLCplcA0gZ8a+Qmf+cfoKdEQBE+nn35vYXh25PNx8Ln35/gJ0wnz/uejT0zS+d8Apk0APlz3KohZdn45KWn5/WejN08XnPdr0csjbfyr6PwZaSz8hvoHFhaxmzHl/mU0vv8E8/MMZ2uK/xH9eTR9747Lce1Po+dEi8qcV9dG580R2FjIHzJKfE6O+YW2aIPjQXhPt1k/fE4cGtX2H4Hnp6q/if4ici9txtzD8n4geupmMh7kaa11fH7y/Ufs5776XBeUCu4/FbnPtOWDkTnW699Nzv9DXn87sqE7b6Mul7ZkyjNHvTB61VQJ5mfzJj6/Gz16dP3FOTafmg/GG6KjJLvq0Fz8tsh9MN4U31WN3O2Nechub2DbVwIlsCUCRyaXnVw70uz1kacMu31nd9Lcg+7F0wFj4umAH1Wz6HOuVgJbJWAR9DXR1dFHlijEotjmk83B7TSLX0GJgPd10YPxNMZaxpMt85xvFmxk2H11tOxi173qnvXkVV/V5+mtp3r3RsPTRiwE03siC3y81WMT5eZoO+xhKcTc7vXXI/VP22dywtP9W0cXBAfarY2eDtqYODqyefMfI396pE+1nSfw0VRhfHxjpN++2HneO1mD+/CM6Ccjm5vG9rroxsgGj/F1zz0m8q0A88IFkQD8uZG022XmKXOUjaErN1GoTWHzk7Z/bhP5HqykPmPMV6+N/E6Ne6lr202ORjcANgmsyUugBEpgEwTMsQIugf+3RqdH+/vpxCaa26QHCAGLNAGdpx8vi5bZABDsfu0O9U8w6Wu3FmK+3ro/zWJQQH9KJPhehoXNuGOiZTbh3MMW7f8zcv8KrgXWFthvjfRZEG481O/pvK/TnhmtRb5pIOC22L8j2lfThnMinN84pzBBh6+H/9KM6/q8FtnI+K7o2Oj5kSfKr4hqO0/A1/b5gvvYJlLtwCRgbnC/C7oF93dGF0ee8Bvf6c1I99ozI3OCOcWTevfgerQdZg5adlNzXJ955PrIXHqgbAKaf2+IzPvnRq+Mapsg0A2ATcBq0hIogRJYkoAP4kMigf83RWdFW/lgXrK6JlsxAnenvzaSPEXyFPe2aNGTd5/1a5FFquBQ2kdF22X8na/7G1xtm/VUervqmi5HAG4xrT+CYwvwjcyTurVIO3FctBFweK7/QKRsT/otlH8lsgDFcmzeCwKuiN4V/Uhko8H9/6ORQEEZWzVPG7UF79+KtrJYt4HBX8hY+VqwPmqrdju3U6bd00HRTtWl3P1d37J9wXgnOY/bsT8Y7I86lmW7P9P5jLexLwi9J/KNnN+M5j2NNlf59o574DmReciGwHAuhw+K2aygZW0nx3vZsj+Xxr40stFpE+WyyDcYaksS6AbAkqCarARKoASWJODrfoKhb4g8ZTtqyXxNVgLLEvDkQ/DvK+ZfF10TLVr88ElBsifUFqEWqL49sB328RQiqBVEPiHy2xZvihZtSGxHvUMZl+cAC98C+MZoow0AQbQf77MhJ4jH5YholtkYOD3yhMmiHTtf55dvUSBrI+DVkUWqABv386I/jLDZqp2cjIT5pVstZJTv9Tn+quiHorXIfPUbM8p9ZM7ZyLDZNGwc2Xy4Pbo50p5Zxu88ITUHymdxj43g99borgijWYaZvGuRTRrp+LiNC+Mwy7/4wJGRJ63G1/h9JhKcaed6NF2fvrkX9OemSBl8Qjl8RX7tfHeknFnjzvdPjLzqszTusTsj/bTRNDY8SB+UO81PG5SHufZJN7Qhhxuadpgb
1qKhPdjhRtMMjI268MJXv/nZIZF8N0T6MuTbk2Plq8fYGFechvFRhveDiTWkd5/hYfz5knPGWX68XDMG47x5+wXTTnnkVe+nI+3SPmUsMkyHdh+WY31Uz0ciTJQzy6fmlak+7XhCZLyviV4T6cci42fmkNMjfvqk6OXRHZEyMeV/fGKW7+T0fb4trz5cF/Ev743hwNPruZH6jMeivsmLDR7Km+Zv/I6MsNc+LKUxjsbL67RPKc+87Dw2/PmkSB+HMcNe2WN/0id59AkTDPRh2gbm6tbvM6KLpxP1/XwCwNdKYNUJmJDoYDL39rhPjp072O55HwK0GwzbkyNByFmRD8vN+pX0FkmzPvB2Qx/bhi8mYFFlAbK/TZBtsW7hszfyRNrie55ZDH599MnIgsri7bR5iTd53uLZAvP8yILPb1x47/z+sL9MJRahp0QWme6fRWNyVK5rJ35vn6SXZ5YJvjxdsijF9/eiq6Nl5hxpXhf5nYbvjQRi3xNdEU0vsHNqKfvOpNLuq6J7lsqxOJE2+hOSZ0UW9eau3xhlMR8dGZ0XPT7ib5g4L0Axzm+JLorWozEXPqff8uEteGOfju6Kro/8De810TQPY+nbJHzUeAn2BAUfiYz170evj8bzpGD1zMhvQRwfCVSM2xCo3Jhj4/emyH3L9EM6X8UWbGBhvM6K1GvMtHc90k7Xb460ZTBBjXHV5j2RPDgIBG+PjJV+ju8H/Xp6hKG/Hfc6GL4XRvohMPNefXdGfMcPn80z/dEOf45zamTssNOeaXbjOrXbk+jDojdHXxo9JcKU3+Pz6giLc6Jvjtxr8imfYaoOfPzIHM5DHXxLm+Q1VxkTZWCnTu2+N5LXGL0yUtdgrhsPbbKxzp/UqxyML41wkm6WSXt2pM4TIvf74Bvuo8E3rsyxcVvG5PcZvxYp4x3RXctknKT3ZwI2Aga/5X/ar3148xc/zofLtOm/v33nu/xIm/XvuyN9U87RkXL4zYuim6J5Zlx8k2EYv/H9qJ+uux8HdsZzfB9fkvfXTs7l5T7jH77x9MmI//kTKj7JjPPPRldH/NUcyxe0/aGR9Pp9WzTML7Pm9L/PdWP21MjYvj7ih7UlCDxkiTRNUgIHMwGTjQ+VrzjIOumDYfhg1jUfVD4wxh/6B3qXfQD9cXTVLugIvj4kLXZPjnxAbsX443Ojg2mctsLhQMljUWKRs7/N4uhPI4snCz0LwvVo1uLH5/xaZPGmvb6a/mXRdpnFrwW/YOP0ibTNYnLW4nW76h3KsVhUPxaPivZGFy2oBAfBBBYWxRaOs8wiWp8sTNl69KZoFuNJkge8SOu/q7ogenh0fGSsFi3GH1DI5IRASR/NLea97bLbU9B6JLgn7bRoZ2sR/z4v4kfvjm6NMOd35jp5/IjgCyK+wLB7RuTPHqS9OlqfHFvkY4qJzymL+OujwZQpKD4jMseb3++MzI3yCXQwVMcrIuUbd200d65F0r8xEhi5JvBw3TytnIsj+ZgARx8EYq4dG+mH/NJ47zpf+FwkaOM7zJj4+ra5fz16fcTnjZE2qterOuT7aMS0CT/tc20wjJ8dYS4N3jdG2qEcfTA28gxjlMMvmPp8zu+NcL0iuivSHgHZmZH+PDJ6STQEevq9FmmTa8ZFWz4yeW8c9N2YCOT48R0RRkN/13I8jI/8+nZpxIyVccdRIOlYnfo31HF6jvdGaxG+7rXB9uTgmdFTI32RT3CojSdEfE1/lTltWOGGqf6tR2PfwOWJkbZJ69rHo43sEUlg3ahv2nvLRhmmruvfz0fqtOn0a5Gy8NcWzF2bZeo+KlobpcHx1si1wT+0SduwWWTuX/718AjTwbDGxv2oPvfVxZGxPTQy3tbPrr0wujIa5kftwNt798je6J5IHw+JMHZd2eY1/vSaSNnab1yVz1/56Suj6XHhk2+LvneSTnmbHYdkWU0bD/RqEmivV52ACc5Tjycd5CBM0HQwmQ+1j0VX7YJO+dB+fGQB5gNuq8Yfz9hq5ubb7wQsuB4suzYVW0wrn6ttAAAgAElEQVRbTD0uuiYaFl/jNlnUCaosyK6Pboi2cwNAcHJTdEl0ZLQWfcfk3HhBmFM7ZpelZItpfXUfXjSnJtct2gUoFvp3RUMgOJ3FvWgBajGKqwUqbdYEKhalxkD95gi8NmsWxMaaXbfZzAvS6782Cjq077BIP81p50QW+Bbgr448yR6YHZFjTw2fFp0fvS96RSRweVT0fZE1pnO/HglQ1CVwODf6/giLgYc6+Kgnmnsji31BMx+SV1nHRzYVTot8G8I9cHuErfr4n/S+Uo2xzwj9wE6+4yLByLuj6XsXW8EGX+LL6xETVGjrmZF222y6J9IXc/Xe6JOR8pWrTm3VFk9knxGdHvnBSGUvMv3wbQL8Lo08udU/thZ5io73rMDQeAzs7s7xi6Oroo9GfFmA50nrUyN1GHN1jA0rZV8T/X4kGDNeyvmSCAcM74z8ZgSGQ3+lOysSaPOlx0bGwriOTbDnfvjFyFykDuOOj/HWD2OrbRg/NDLefAxXQeKrovVIm4yP38UwPvo5NteNqbFZi8x/uAy+od4To2dG/NAGkr7xK3UvMpyMMdOHuxYlnnFN+nsj3I6K9G1f7PJkxpXPKJPf2Kwxd6lnmk1ObWjYDsH/dTn250zq4O/Y8VfsvPL19WjWfYXtS6K3R8ZE3juip0SuKfOFEX9Q9uCvxs099u2R/pkTxmaMbo74GN81lsqqLUFgXx1uiSqapARKoAQOegI+hHzY2ky6IDp8iz3+dPL9cvShLeZvtv1LwMLvwTKL2PXIwt4iyeJseoGkbYKJx0cWVhZH9zi5zcZvLdD+e3RhZOEo2LDIGwKYHO6YCXgs7vdGApS1aD2aNu0SnFiEvjMSWM4z66Mvn1zETl8+Ny/xgvPyWDxbJAtmHrMg7aJLFrcWxlgLUrbTBAgMF/7CjogEnDYFjK2vbq+7MDGBJV9ai54QCT7fEGmfvOT+8G2F8aLcOUz+XWSslDOsRU/KsQBRnS+JXh2N7zGBKIbSCL5Ojfj810fuA/XYbNBeQQFTPl7qfGkknQBTYDNtfNV9dNvogkBNXwSSeybH2iuw+srImPCjN0XqGAxTPAVj+jC0Z5TkAYfaddgkva9Iu8cHU4dgW1ueODo/HO7NAR7q/NXookj6wXDQJszdA4KqKyL9GExf9N0GyqWTa87plzyYK3+4nsMvmPYpy9PscyP9eEQ03W9B28ui10Tuq8Hcw98UnR25TwV0uB0eGV/lXR4ZX1yGAF0+5chjbLVvML7y5Mm19bwaW/0axgkT+ZlxMsbmSkHscE/cf/WB/yr7kMlpPj9r7n1grn85ow2Y6Rd/f1g0zWpR/ulr+qIdA1PH/Hmw4b6ezjfvPY42cjHFyAar+2/gjo/2azefPCPy5yN3RGOfwumayIbPwNo542sjmn+5x/jitL8ae/2gcZl5e59pCx8xDxmLYb6+/2r/XUjAJFYrgRIogRLYNwI+/H3IWTx5kmHh/KTIAmgz5kPOk5ebN5OpaR80AsNi6MFoAJ97V3RKZGF
u8WqBNQ5CfMa7dnxkkfWOqet5u21mEXZJpB0CkbMi7XtFtNnF8WYb5b75g+j0yEJe/eszChH8WNDeGd0SWVjOMwvgPZOLFuYfnpdwifN/MUljPCyWt2L/JZnkN46L2r2VsseLawty9RwVnRAZV3OaeqdNACDA3xsJEAVhzg3tExwI6sxn69FgyhKMC3wEEkPgw5eNn3PmwWm/0c5rI08GLfxvjPA8MTJe102uTwdS7onLIoHj3sjc7JsAYxM4Ke/2qfPy8hdtWosELhhpy9BPwcd50RtH58wN+m2jQRnyLzKfFfqv7Kui6c8A5d0R/WEkSB6bPF8dHR7dGl0djYMpaeXXDxyMq/EyxtKPzX2h7sEnhld+4BsAxscYzDJ1Dn6iTQK9adOHGyK8x4al8dE3+fAYNgCMr/Q27bRPX8b27rzRprWIzw2mHOXpg7KvjIzF2PjKFZHNe/57WvQ70Ubjxd+G/ilz2udyakMb/EdZeG2ljA0r2WICc4D7RN/cF9dE09y1/7rImJwb2QDGmK+Mjc+aFwbTT2UN/eWHZ0aXR4NfuO69sXU8XWZO3Wfax6/3RGv3n+q/yxAwwLUSWGUCJjBfCbPYqB1YBHyoTy9eHuwefDQN8AF4W2Q33Nfizop8uC9r+rWbFgLLtrvp9j8BizJPaR4VeXJ1dTReWFsMPzayUL07un5ynJdtN4s0C0VfkV6Ljoh81dg5i8TphXdObZup2yJe3wVoj4sujsYLVuc9ITosel20PnU9b7/ILMoFIUzbpwOqL0q8wZshkB2XuUGWB1wW3FmzzVsIPyDDJk6MAzVzD7+xKHfefPTw6JQ55UmLsznu2MhY8zXjLtAURBwTCRL+JLohuiMaAsUcfsH+c44wl9dcOh6/IZGg8JWRdpH5dS0yPu+PXJ9lPuvVfWokWDgkGgd5fOeD0aw6XRvm5PG6+S05/72R+88TUt96UYdAVeAi6PFZsIwdmUTK4SM26ma1Qxtw4wPSDaYv/MN48bWjo8F3/yXV/f7z2ckJ143x9GeosdPuacP16ki9Q9scqxtPZflGxBkRc23cxsnp+9qP5yz7m9FJ/sQwWYv4w3pkzKfN/fl/oydG/HGwoY/G/uZoXr36ZpOOD+nHLHZfKHRygMHQFv3kF7PaNp1v/H58320276Jyt+PaoSlkLcIWt5PmFGpuGOzIHJhnp+comy/T/oy1DWJlY/7CyDdgjKM5RB7jQotM+/gG/urGtOunRcQm1wCrlcAqEzB5XB/50K4deASmP1R2Qw+0ySLHIsqi4/cigZAF9KwF0W5oc9twYBIQKPE1CyiB0Aui8SLXQvbsyGJLWgHPEdFOmYX26yOBwHmRQMRTwzuju3eq0pTrnnO/XRudGR0TrUXqHUxbjo2ktfBcpj3jNZLPiq3aOO9W5wBjKa+x3G4TPDLttOAWfP2Hybk9efXE3djOMgtuwQKTRxstwOUZvnJ/co6Pj/ii8u+IBM+XR/yXKccCXt0W9PMCIuM39nEBnyBRuYLfRZ8JAnx9lEebxxsAzs9jO69Mawd/t+7Pv9YirARKF0T68O7IRvA10bzgM5fus8Oiwd+mA6ghjVflaPfA3DnchkDsxBxr0yx/NTb6znA+ZHI8fvlE3swba+nkeWL0tZH+qtfYeR2Uw7mm/bPaJsM05yGo02ZjY3znGWbT95m26qd8H56XcXKe/+i3cZBnI+OfQ3CqfcZgEbdZ5bnv9FnfcNlNMZm2aQ+fcd+6h2eZ64MfuQ/5wrSZm6fH1hzhvvi16NnRUdFadGpkI02eGyIP59xH89iO5wPjZiy6ARAIG9lucraN2trrJbBTBOZ9GO1UfS13NQhYINwZWZj4IDsremb06NXofnu5HwhYFHlaaNG/Fgmyrowsiny+87VjovXIryVPL8JyatvNws3fCKtbu86J1P3qaN4ibjsaYdEn2HKfWcTbcHP/DYaDDYDbo9uiZRaJQ3stch/6L0Vt+kjwzsaL1c0WMizGt3sM9Q0XJgi5OxLMDP1V3+BP96f64n99fsrDPjV5lf666Fsj4+/PAPjCERPxjdOiJ0f+NpjP6t+wJp0X/CfJA0z75dOOjT7Lx+XOWv9uli0f+j+RvvqzL753VITfnghXwdNlkX7eGc2zcdC5yDf1cfq6vEN/Fo2VuuXVjo/OKMd15c/jcGau/Vykj4L9oc0C4Zsj99XR0RnRPFtU/nSeYWyd16ZF4+tenW730D7np5lN18U3pFPnMuZeMZecGgl8j4juWSbjJI0AW+CsT8Zjuu2bKGpHko7nO22bdb8MFRt/4lOzxmgee58VfpfBZ9i3RKdHeyI8j4x8nj0p8tnhN0iUP8uGOsf+Mitdz40ILBrQgiqBEiiBEth3Aj78LBTuiq6M7G5fEB2270W3hBK4z6e+O7JoEmhdHVnMWsB9Q+Rz3sL0+mh/mMXidZE/BbAo5uc/Pjl3yw42QJ+vjizMsXhsdNGkPgvtx0SHRldEd0zOL3qxqHTPnhIJJOTdqn3pJKMyNxMkjOszj2A7BDVbbct0PkGqMdI2T9qGQGjY/Lgp5/wd+zLMsB8W+8q5LVqPXhmpA0t/nnFqhOfeyNNZTKTVBuYp3rKBmDzqxGUjNjZihnK1dTvM01v+LgD25F3gsjfSzxMjAc350b2RPxMY+OTwi0x7hiBw2DCaTuO99k/3E+uB3Wty/KuRgGyRSb8ZBjZt9O+4SF731tujWyOBnH657/wvAIs2AHJ5aRvGVoaNxtd8N+0zQ//MgTYsFhmfk27WRsKsfPzW19WfFhnjE6IbZiWcc87GmPqUIwCeZdP9GdLIN+/arHK2cm64/zE07vxqI+OH7odljb8L6s3J10Y2zvjZ10VnTY6x/aHofdEbo+n7Bwdjz5SnDbUlCHCiWgmUQAmUwM4S8MHkA9ViyddFBUfPjSwCFi32drZVLf1gIHBjOnFXtBY9IRKsWQRZFPEvCzKLq80szJJ8n0z9r4j8MJk2WMRp1/ftU6mLM7vHBFlXR+q0kDwyujM6Kjo+0q73RHhtZIIPi04mePiyjTIsuC4QZOr/8wXpFl0S0OmjIGs7zZ8nWQtaWPshRaadfzU55keCL0H6LLMA165Z5pp5j4yNIP+iyLj4E4Hzo5Oio6NbImnUfUQ0HeTm1H1mvpTPporASYAiiJDnkGiR/ddc1Fdje/eihJu8pkz3F+EkEHxJdEok6Bc087+1CINZtp6T+s4W+Zr+HzpJN7zwjSHY5R+OF42XfPPGbKroL7y1ce0+MqY2HAVjONJQlvtEIL1dplwBsv7p12ELCjb+xmEwefnFsDnk9yUW2X/KRQHoRyZ5FqV1jU/fFN0e4WLD8bJombkFI5//jM+8aXKszQNLnGmW8YFxX2el2ddzQz/U8++jef6knkVzwEbtGPIaZ3JfXh35xswTI58b5ovHR1dFszYAjBtzbbgPJqf6Mo/APOeal77nS6AESqAEtk7Ah7sPqRsjiyhfkb0ispiolcBWCFiA+yV2Cx+L4JMjn+0CDoGVxbO/t97fJph7UWSBrD3nRedG8wK77WgfBn84Ke
jwvJ46OcbhhOjWiTDbyARjFpzuWQv2Y6I9G2WacV3gsndy3n1+7Yw0y5xaTyLt3kobZpVvTM6Inho5Vr6gjuGIlblqLTopmrdefEqu2dSwWXJmJGB4duQH+d4bDQEhjpgq++bojyJBhWBmSKMM/npsZPxm1YmnP6VSh6BAwMbHlOFbHmvRLBOc7o20793RdCAxK8+iczZGbOT+dfTCUcJhjteP66NLJ9e0Txvm2Z25IPiR330yq+/qPDKa9gEM1iN9Oi06LJpl2vATkTbb7HFPLGv837370Uif+LLx1N7BjI10TPtn9WFIu+yrvt0SHRr9t4i/TJt6PDWe5ovHDZPzfHgIFKfzr+XEoyN81CUQXcb43WsmCfn+hdF0G6bL4X+CW+OoffIbdzbcI461ZVZfXePn8+oZj4e0WzX+6z7lc6dH8+ozt/5O9KHIfeAzaBnTf3O1uePHRhnMcbio/+LIPMT4FnbTZuz5uzzmk2Xm9ukyVvL9dtycKwmunS6BEiiBfSAwfNBfmTL87ZuncDdGFlW1EtgsgcuT4e8nmZ6cVwslm0v87K5IIPJg2DWp9JLIYs56w1dJBS87tfZw/1w7qc+i8GsigeRXRBb/N0XDgjKHCw27OyPlMQvWJ0Wbabu0z4ksopV3RyQg2YoJjgVc+mMxvMjUK1gTREzLQv6Q6GmRhbsgQ1DuzzQsopm2rkdXRer6xui0aLwAV4e2DE+GlYmXBfi9EV7G4PxIW8aGh2vGREA5BFzqEwypxxNS5Y95a/vpkc0t+XwDYD16e4SNIOycSJ/H+dTnSeKeiL1q8rovL1jpFz76qD/jOh3r3xAQuQdwWWTGw1jon/EZc8PkuMgvpc8KhPzOBv/SDhska9G4PfIcFf1ApF0YSb+sDb5hDKYDU/WoF/thU0F9s9q5bH1DuvUc+G8o+eQwvtNcnD81Ms5jMx9gKu+jo2dH4zTajYPxOyXiuzZTPxItY8bUj9TxW0z4rPvImKtH+YSD9zYx/C37MyImwDYnDqad7gXl8tUvj7RvMP22kXFGND0GQxp9VqdxUqe6x34wpNvoVVuG++ToHP/gpLxxWdpmzLVH32wsDfdyDhea8knZflDyyGhctuO1iK+y9cg9Pm36aGz126ZjbUkC23FzLllVk5VACZRACcwg4IPr4uiK6ImRhW2tBDZD4KYkvjMSMFmQ+TMTrzYFPP23sH2w7CWpWCBukShI0LadNItnC3LBusXl6dGJERaeSN8TLWsWs78VnRxpux96uy66MdqIqQWsegVcjpX1siXyJclMuyFnzRUW/sq9bGaq+09amH9zNP7as/We8xbqx0YCdqZdvqlx6eT98LKeA30X0Om/p3v+y9zrIwvxtehHIoGX964NgZOybosszH82stC/MsLMgp0vPD0S0FwT3RIxfvzmSPBj/JTrR8KUq/1nR8Pf0eOBwScj4638MyMBGEZvjPTNsbqeEan72uiiaDvMj10OvvHaHP905D5kNgZsjuiHOf2d0V33XZlvr84lG3iYGhNB3Bsi/I6JfjRS3yzf8/nx9dER0YWRvMpQp/zHR4JN478eGVv3xLL2J0lo3PRLudrCH/m2eccm9vmTc+o2tvxtX839bGwfF+m7jRzjaMxxwMqcol/6qT2D8Z83Re7b0yL34r+JbAro+yOj74ieFvET/qRcPrWs3ZyEuOqze+UnIn74e5F5QlmYnRR9e2RO0u5bI36pf2Pj6/IpA89PRHxZ306JbDKYi/QN47FJc2+EgXpeEPmdAveV85sxZb0y4o/a7l7/t5FNAfeV/j4h+r4IR9zcy8uyc0/4Bs3eyNgM88t6jhk/9nsS/BYj3xhS79j0c0+E73r07i++3HerRuBZ6fCHIpPtsavW+fa3BEqgBErgoCTwU+nVP08kmBkvdHX4x6JPTa7/zOT1g3m1KB2bBZNAVFmemFjETZsgy8JTmvETqiGdhfhw/ZIcWwwusr25qK7PR0MfPptjwd60reWEwEq6v4osgqdNUDjUf+HURUGHdYC6/jLyVeePRe+KBDDT9racGNp01PTFvLcQVYb2kg0V/V8U3OAhzZ9F2jH0dXrBnktLm7x/OilP/2fZOTn5j9HQn+nXoS38xNO690ZYTfvSULaF/Q9GH4j+IZL/b6IPT97rl/e/HgnABlMe1vgP+f4ux8r520g53v9xtDcam0DsVyLruIH5OJ+xFAwIiAcb6ntrTihX+dIpQ1+H98ZaYDSYfCdGOLlXLhhdGx9K845JOjwGf5ff17n5qbaqx7H28k/nvNcffjTY83OA219E/GRs+mX9Kv/Q7oG3MdOOP4/U8dRxxhyr4+XR0B71Y+C9PvKN90eC1PGYa8PQP9dm2SE56X7FV1n4Ci75kHPaJujFQ73vic6OGF6Cbvl+OzIHTdtDcgJbaXAZj5Nr5in1GU/lqw9DfdI/dQ9t40OD6acg8u2T65hii8vYV/iUAHvevZBLc00eDH830hY+rx/Tct71P4hmzTU5fd+8cmHEH4d72esg7TZGxlH5p0dDm80R5ktjM9QtvXtckGzecF47Hx0N9uIcYMKvDhudH9jxx4EV/spUx8CS75wVGafBLsyBMVLfeCxHSe7zV/eGOcGY4sPXle94GFv91f5p01+fe9rBN8Zz0HTavp8iMB6swimBEiiBEiiBEjgwCVyeZv9IZAHp1VOmO6KbdkF3rk4bLo6eF220WbCvzf10Crgx8iRNoHF4ZKF4y0SbLf+eZLAhohyBhMBWMPTS6PrIU6nPRBbL6hE0W5Q/N1qLmKeExsRTu62avAIHAaI2qGu6PG3R93lrO+mluSuyYL4iwumfolkm7W9Gt0ffHQkaHhrpq2vY/H70mmj8JFN5V0XfEX1/dFw05Ptojm+Lros8TTQuY/v7vPnxSAD5bZEgQV/vjfizcrHXh8HUd2WkL98ZnRYJGLRTOnn/KNIX9Y/NE8sbInmlm2V8yhNb7ZB/4OX1JyOB07dHgif9ZOq9O8Ln9dG4bHV5Wqmv6h+bejx1xUA/hvLWcywAvz4yFoLc6faqj58Z22+NBt/P4X0+iB/ml0ZDH1zThqF/xnSWqcvTXuVr18Mjvs6n8BOAvy4S2PJRPigNG+Yi6e6Mpv12kuy+MZAGO/fUYPJrs+vfE50QPSzSB+1+WcT/nDPu47451nfMnh65N7Vb+/TbeL4lek2kbVsxdRiXZ0ZnRt8SHR1pDx/Ufv3h6+5hfRn3L2+/YHztDZG28WU8h3t9Pcf6ynceE+kzHxoM18uiX47OjviONM5rg00FfPnBuP4P5P2NERbSDTaw49vGfm/E7wZ25gX9/q0I4zF3/qKdxmNeX/nrCyKbGd8cuX/0VTnaaDwuifRp3M+8vc+kfVyE2XWR9tdWmMCz0ne7R3asjl1hDu16CZRACZTAwUPgp9KV4anOM3JsYTk2798eeRoi3SeiH/ziJPe92xNZREpj4fWEGWm2+xsAqrAQfGs0tM8THwHNtK3lxMsj7fO07PzpBHnvSZb+SXPhjOv6+NrJdWk8KfyhGemcetsoncX2LLPQFDhIOzxl9
ITqzyIB1S9G2uQpp2BweHKnjZ6OnRhthz06hXhCpvztKnPZdmGA6ynRqZG2CHA2Mn5pYX9SdFp0QsQXljEBzDHR3kj+Q5bJlDSHRvio7/hoCESXzL6lZIIiQR826l2Wz7zKBm54nxxhuBkbs9OmtWh6zthMeeO0+GrX3sj4qGt/GT9ci9TPlwTzy/ZLOr4n3+Ab8m+38YXDo8EHvfLdZds5tIffimO01euyfqwe/VqLjJX27KsNZQ7s9EnZm+3TvHbom3uGrxrbI6ON/Irv+UbCByb58lJblsB2OMWydTVdCZRACZRACZTAzhD4pxTr6ZJgwSLZE5MrdqaqLZXqSZSNBUHSEVsqYflM6rIZ8sTIOsfTJE+jtmr/LxmviX448gTSIlUwbFF+3IxC1X9XdG304kj922G3pZDLo6dG3xXpk3HfH4aBPtFmTPs+MtFm8knryeGtE20mryeB+/tp4OdS5+0Tbaat89LuCzdlbpXdvPaMzz8YfIf6+eH6RMu0dZwGU/cm7aTxhXsm2pd6PpnMt2yhAP38+ERbyD4zy1DmvsyjMwuenNRX89uyZuPBN4TM7zdE1y+bsenuJ9ANgHpCCZRACZRACex+AhaCF0+aOS+gvDTXfTXUBoBATUAybb4u+a7I01uB2d3TCfLek+3XRZ7A+Cr2tN07uu7bdha8y9jVSfQr0ZdHFpSzyrYQVKa6LWLXo2mTb2jfndMX814Zgu9XR8rxlWiB5Cz745y0WGfyzTN9vCnylW9Pox8beWLliaI69EfQJbhQl28LWJQ6t52G3zkT+dbB+nYW3rJKoARK4AAgcEjaeH5kM8o32pb9DDoAutYmbpXAs5KxfwKwVXrNVwIlUAIlUAIlsBEBT6AE/8dEvnVBNgQeEbm2kybw9ycIP7cf6trJfrTsEiiBEtgKgWcnkz+FEvzb8K6VwH2/dtkNgDpCCZRACZRACZTAwUhgLZ3yrYb3RTYgaiVQAiWwKgSOSEffG/mmmj/Dqm2BwE7vUm+hSc1SAiVQAiVQAiVQAiUwh4A/7/Dr2f4e98g5aXq6BEqgBA5GAmvplD+z+oVoK7+RcDAy2XSf+hsAm0bWDCVQAiVQAiVQAiXwoBHwewN+DNBvPCz63YIHrYGtuARKoAR2iIDfffnZaH2Hyl+JYrsBsBLD3E6WQAmUQAmUQAkcRAT8GvpmfjX7IOp6u1ICJbDCBLbjf1hYYXz3d71/ArDyLlAAJVACJVACJVACJVACJVACJVACq0CgGwCrMMrtYwmUQAmUQAmUQAmUQAmUQAmUwMoT6AbAyrtAAZRACZRACZRACZRACZRACZRACawCgW4ArMIot48lUAIlUAIlUAIlUAIlUAIlUAIrT6AbACvvAgVQAiVQAiVQAiVQAiVQAiVQAiWwCgS6AbAKo9w+lkAJlEAJlEAJlEAJlEAJlEAJrDyBbgCsvAsUQAmUQAmUQAmUQAmUQAmUQAmUwCoQ6AbAKoxy+1gCJVACJVACJVACJVACJVACJbDyBLoBsPIuUAAlUAIlUAIlUAIlUAIlUAIlUAKrQKAbAKswyu1jCZRACZRACZRACZRACZRACZTAyhPoBsDKu0ABlEAJlEAJlEAJlEAJlEAJlEAJrAKBbgCswii3jyVQAiVQAiVQAiVQAiVQAiVQAitPoBsAK+8CBVACJVACJVACJVACJVACJVACJbAKBLoBsAqj3D6WQAmUQAmUQAmUQAmUQAmUQAmsPIFuAKy8CxRACZRACZRACZRACZRACZRACZTAKhDoBsAqjHL7WAIlUAIlUAIlUAIlUAIlUAIlsPIEugGw8i5QACVQAiVQAiVQAiVQAiVQAiVQAqtAoBsAqzDK7WMJlEAJlEAJlEAJlEAJlEAJlMDKE+gGwMq7QAGUQAmUQAmUQAmUQAmUQAmUQAmsAoFuAKzCKLePJVACJVACJVACJVACJVACJVACK0+gGwAr7wIFUAIlUAIlUAIlUAIlUAIlUAIlsAoEugGwCqPcPpZACZRACZRACZRACZRACZRACaw8gW4ArLwLFEAJlEAJlEAJlEAJlEAJlEAJlMAqEOgGwCqMcvtYAiVQAiVQAiVQAiVQAiVQAiWw8gS6AbDyLlAAJVACJVACJVACJVACJVACJVACq0DgIbu8k7+Q9p2wyTYekfSHRI+IDotujf5pk2U0eQmUQAmUQAmUQAmUQAmUQAmUQAkcVAR2+wbA8aG9d5PEfauBHho9bHLcDYBNQmzyEiiBEiiBEiiBEiiBEiiBEiiBg4vAbt8AeG1w37hJ5L4xcEr0yejjUYP/TQISmMIAAAWgSURBVAJs8hIogRIogRIogRIogRIogRIogRI4EAg8K438UPTO6NgDocFtYwmUQAmUQAmUQAmUQAmUQAmUQAnsNIH+COBOE275JVACJVACJVACJVACJVACJVACJbALCHQDYBcMQptQAiVQAiVQAiVQAiVQAiVQAiVQAjtNoBsAO0245ZdACZRACZRACZRACZRACZRACZTALiDQDYBdMAhtQgmUQAmUQAmUQAmUQAmUQAmUQAnsNIFuAOw04ZZfAiVQAiVQAiVQAiVQAiVQAiVQAruAQDcAdsEgtAklUAIlUAIlUAIlUAIlUAIlUAIlsNMEugGw04RbfgmUQAmUQAmUQAmUQAmUQAmUQAnsAgLdANgFg9AmlEAJlEAJlEAJlEAJlEAJlEAJlMBOE+gGwE4TbvklUAIlUAIlUAIlUAIlUAIlUAIlsAsIdANgFwxCm1ACJVACJVACJVACJVACJVACJVACO02gGwA7Tbjll0AJlEAJlEAJlEAJlEAJlEAJlMAuINANgF0wCG1CCZRACZRACZRACZRACZRACZRACew0gW4A7DThll8CJVACJVACJVACJVACJVACJVACu4BANwB2wSC0CSVQAiVQAiVQAiVQAiVQAiVQAiWw0wS6AbDThFt+CZRACZRACZRACZRACZRACZRACewCAt0A2AWD0CaUQAmUQAmUQAmUQAmUQAmUQAmUwE4T6AbAThNu+SVQAiVQAiVQAiVQAiVQAiVQAiWwCwh0A2AXDEKbUAIlUAIlUAIlUAIlUAIlUAIlUAI7TeBg3AA4GPu0037Q8kugBEqgBEqgBEqgBEqgBEqgBA5yAg85gPqnrcdHp0eLgvyvyvVHRIdHT4/+ekEfP55rl0V3LUjTSyVQAiVQAiVQAiVQAiVQAiVQAiVQAvuRgKDfBsCHo3/eBn02ZfxR9LD92IdWVQIlUAIlUAIlUAIlUAIlUAIlUAIlsASBRybNz0SC933dBLCRcPYSdTZJCZRACZRACZRACZRACZRACZRACZTAg0DguNT5nmhfNgD+MfkviRb9KcGD0LVWWQIlUAIlUAIlUAIlUAIlUAIlUAIlMBDw9/3PiT4VbXUT4IPJe2KRlkAJlEAJlEAJlEAJlEAJlEAJlEAJ7G4Cx6R5b4m2sgHgzwdetLu719aVQAmUQAmUQAmUQAmUQAmUQAmUQAkg4If7Loz+NtrsJsD7k+fIYiyBEiiBEiiBEiiBEiiBEiiBEiiBEjgwCByd
Zvo7/s9Hy24CSPu8A6N7bWUJlEAJlEAJlEAJlEAJlEAJlEAJlAAC/zo6L/L3/MtuAPjxwEOKrwRKoARKoARKoARKoARKoARKoARK4MAisCfNfWm0zH8L6Jf/LziwutfWlkAJlEAJlEAJlEAJlEAJlEAJlEAJIPCQ6NzovdFG3wLwo4GPKrYSKIESKIESKIESKIESKIESKIESKIEDk8ChafYvRv8QzdsE+LtcOzuyYVArgRIogRIogRIogRIogRIogRIogRI4QAmcnna/M5q3AfCqXOvT/wN0cNvsEiiBEiiBEiiBEiiBEiiBEiiBEhgIPCIHPxN9LJreBPhwzp0a9el//aUESqAESqAESqAESqAESqAESqAEDgICJ6UPb42mNwB+IeceeRD0r10ogRIogRIogRIogRIogRIogRIogRIIgYdGz4/+Oho2Ad6X4xOjLymhEiiBEiiBEiiBEiiBEiiBEiiBEiiBg4fA0enKm6PPT/S8vPrzgFoJlEAJlEAJlEAJlEAJlEAJlEAJlMBBRMDf+T8r+mD0jui4qE//D6IBbldKoARKoARKoARKoARKoARKoARKYCDgvwW8JLowelixlEAJlEAJlEAJlEAJlEAJlEAJlEAJHLwEjkjX+tX/g3d827MSKIESKIESKIESKIESKIESKIFNEvj/rZ01IKQVfGEAAAAASUVORK5CYII=) Plotar o gráfico comparativo:
###Code
import seaborn as sns
import matplotlib.pyplot as plt

# Box plot of per-fold accuracy for each model, with the individual
# fold scores overlaid as jittered points.
sns.boxplot(x='model_name', y='accuracy', data=cv_df)
sns.stripplot(x='model_name', y='accuracy', data=cv_df,
              size=8, jitter=True, edgecolor="gray", linewidth=2)
plt.show()
###Output
_____no_output_____
###Markdown
Mean accuracy across the 5 models:
###Code
# Mean cross-validation accuracy per model
cv_df.groupby('model_name').accuracy.mean()
###Output
_____no_output_____
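###Markdown
For reference, `cv_df` is assembled earlier in the notebook; a minimal sketch of how such a frame is typically built (the candidate model list here is an assumption, not necessarily the one used above):
###Code
# Sketch only: cross-validate each candidate model and stack the fold scores.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

entries = []
for m in [LinearSVC(), LogisticRegression(random_state=0), MultinomialNB()]:
    for fold_idx, acc in enumerate(cross_val_score(m, features, labels, scoring='accuracy', cv=5)):
        entries.append((m.__class__.__name__, fold_idx, acc))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
###Output
_____no_output_____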
###Markdown
Confusion Matrix Generate an SVM-based model:
###Code
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

model = LinearSVC()
# Keep the original row indices so misclassified texts can be looked up later.
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features, labels, df.index, test_size=0.33, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
###Output
_____no_output_____
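###Markdown
Before drilling into per-class behaviour, a quick sanity check of the overall held-out accuracy:
###Code
# Fraction of test documents whose predicted category matches the true one.
from sklearn.metrics import accuracy_score
print('Test accuracy: {:.3f}'.format(accuracy_score(y_test, y_pred)))
###Output
_____no_output_____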
###Markdown
Plot the confusion matrix for the SVM model:
###Code
from sklearn.metrics import confusion_matrix

# Rows are the true categories, columns the predicted ones.
conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(conf_mat, annot=True, fmt='d',
            xticklabels=category_id_df.CATEGORIA.values,
            yticklabels=category_id_df.CATEGORIA.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____
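###Markdown
Raw counts can be hard to compare when class sizes differ; an optional row-normalized variant of the same matrix (a small sketch):
###Code
import numpy as np

# Each cell becomes the fraction of a class's true examples that received
# a given prediction, so every row sums to 1.
conf_norm = conf_mat / conf_mat.sum(axis=1, keepdims=True)
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(conf_norm, annot=True, fmt='.2f',
            xticklabels=category_id_df.CATEGORIA.values,
            yticklabels=category_id_df.CATEGORIA.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____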
###Markdown
Let's look at the incorrect results. Displaying the misclassified texts:
###Code
from IPython.display import display

# For each pair of classes confused at least 5 times, show the misclassified texts.
for predicted in category_id_df.ID_CATEGORIA:
    for actual in category_id_df.ID_CATEGORIA:
        if predicted != actual and conf_mat[actual, predicted] >= 5:
            print("'{}' predicted as '{}' : {} examples.".format(id_to_category[actual], id_to_category[predicted], conf_mat[actual, predicted]))
            display(df.loc[indices_test[(y_test == actual) & (y_pred == predicted)]][['CATEGORIA', 'TEXTO']])
            print('')
###Output
'esporte' predicted as 'coronavirus' : 6 examples.
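###Markdown
To understand why some pairs of classes are confused, it helps to inspect the most influential terms per class. A sketch, assuming `features` was produced by a fitted TfidfVectorizer named `tfidf` earlier in the notebook (not shown in this excerpt):
###Code
import numpy as np

# Assumption: `tfidf` is the fitted vectorizer that produced `features`.
feature_names = np.array(tfidf.get_feature_names())  # get_feature_names_out() on sklearn >= 1.0
for category_id in category_id_df.ID_CATEGORIA:
    row = list(model.classes_).index(category_id)  # coef_ rows follow model.classes_
    top10 = np.argsort(model.coef_[row])[-10:]     # largest positive weights
    print("{}: {}".format(id_to_category[category_id], ", ".join(feature_names[top10])))
###Output
_____no_output_____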
###Markdown
Report the classifier's results for each class:
###Code
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred, target_names=df['CATEGORIA'].unique()))
###Output
                precision    recall  f1-score   support

   coronavirus       0.35      0.26      0.30        66
      politica       0.67      0.69      0.68        58
       esporte       0.70      0.58      0.63        78
         carro       0.65      0.50      0.56        70
      educacao       0.48      0.74      0.58        62
entretenimento       0.55      0.52      0.54        71
      economia       0.47      0.54      0.50        56
         saude       0.59      0.66      0.62        67

      accuracy                           0.56       528
     macro avg       0.56      0.56      0.55       528
  weighted avg       0.56      0.56      0.55       528
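###Markdown
For programmatic access to the same numbers, `classification_report` also accepts `output_dict=True`:
###Code
# Returns nested dicts keyed by class name plus 'accuracy', 'macro avg',
# and 'weighted avg'; convenient for logging or further processing.
report = metrics.classification_report(y_test, y_pred,
                                       target_names=df['CATEGORIA'].unique(),
                                       output_dict=True)
print(report['macro avg'])
###Output
_____no_output_____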
###Markdown
Scikit-Learn

* Considered the most widely used Python library for implementing machine-learning methods.
* The current version is 0.24.2 (April 2021).
* URL: http://scikit-learn.org

Problem Formulation

* This is a **supervised** text classification problem.
* Today we will investigate which machine-learning method is most appropriate for solving it.
* Consider a news site that publishes journalistic articles on several topics.
  * Economy, health, and sports are examples of topics.
* The goal is to build a classifier that receives an input text and identifies its subject (a minimal pipeline sketch follows below).
* The classifier assumes that each text is associated with a single topic.
* It is a multiclass text classification problem.

[Figure problema-machine-learning.png: diagram of the text-classification problem; embedded image data omitted]
2QJlQSERACZMGtAJB65n45cVO9EzotwSed/Cr9Fs8hz8M2Mgts+fhasZJlxdMh0z/9yOke/MxYftyU302h8wyoLhlKskr6tkeEWMg6spGcF934PLha9x8ooBv/bvQlRLNprWD0JIvw8xp68CxUol0mIOoOfgKZD7t8LhLX/Byy8S9RzJS6Zuf+z+52307dgJN/L5WLP3EMKdhEDhZQS2fxVcsx2OLH8LXQe9DNfwrti5rKTLyKjC8L5dEZVsxKZDx/B5az84NKyPfVeKMLiBHGsu5CP6/AGyoRZtGtSDfeQYrJ7ZEQ0Hvg2u1A2f9PHA+79vQJ9XZ+HHt4ezSerUShTrJagTGIwG/g64bfLFqd1L8d74/thw7Ba++XcX+Dtfxa2Oi9E1/XvMTeqDl12PYtxnSxFka8R3uw+if/OBuHb9OCKCwrB760J0H/URxAoXvNreHl/+sw2D3/wBs1/rijF9uuF0sh6b9u3HnRun8MXfm9F/qgQtAnuT+6oHDsbk488NO9GxjkWWbfxlBmb8uQ98s54Nr/zxfXz820YMf+9HfPlyDzZu9/wvMPWbJWg3fDrmffESJg/qiaNxSqzcuQfv9m+Gf/bEYNagZpj2z36Eez+u3H36/FdbF0pp6KHAiuvpTEM8uc9LWgMqGHMzMeH1gnHixm3wRZySOE6Z+GVekqxh4n18zZW2IpR+/l1UjNeTslDPLwj36QmpEj6Pz85UySP55JOKJZdpbWBKeqYVg2sCVyCAgG8DI0eCvGIbxCZLEJVAKgvaWBiK34LaZIKIawcBSYcZ2WHmGSDk8yEyyyDkMeKJpMPnwMpkxJGvxkCmeP7vzf8SRFSaGjSIiLJVxzcojZMKuejf0KHSdg08ZbiRoYbCwTX5mWeyAlL3sF/0+t3/Gk1mlFabmHv85Vf7VNpu17bTbMucjZ1dxtPIB2v0+CDBsGzDX+jdfxpYzWUqxsCXpqFn0llMmrkQEx0Po8fLn2BkuxBoDCgfctn3LXw9JhJe3pPxxhtT8M9P09Ewvx06vj0f032voN8KA6aOHIKmAyYj9+xmLNoTjQ2ns7F0yQLY2grLjj1vazR+W7oY7goFPnp5BN5YeBRhcX/jpS+XoGjtp9gTk4ATf89CmqU8gdogxnuff429S3/A0r1Jlkh9EvbGAssXLICNvPIbIysjA47uHuz3npM+BAyJ6L3kGhLjb6GJrw/eaP8rYFsPlzbMQSf/TZadjBKcvnyJnFsLJF7ZjFVdv8G0Ti74nm+HM9GHSXxzoO8n7KYciRO++O43LP5yMo7mMPtyses0s68vCm+3QSLPH4v+mQG5jmlqshiy6Dj2uHN+E1qT4w+a9jqiN83HpkSLYDiy+ENkSBtiVDcTpkz7CszYlKkLNwNtfNBxTgJkL9XDxgTmRcrHkQtX0SbQF/kzOgAyN1w+sApRZ47h16DWeG3qx/hh+vC7XqAGLNp2DJ3DgqAqVqPfqCmIbHsLX33yAT5uXnE7IyZ/thiXYq5hSsvwKu8Zoa03zuyZh6jTx/FzeHO8+fbXGO51AfpmM3FmgQeW7TuNVu374aeQZnjr7dkYW+8G0vwm4eq8ZgjtMhI3zu0lL2clPvxtF6KuR6Nb0zowFqbgo7924tXXXsL8H2bg/ck9YEXy/+P8NXj32z/Q1N8Vl9d9gisIxpgBErw04SvY3veuptQWQuiJUCAPLCmA+SWCwcRlfCnzWFHAaAcvZztcTM5iX3zsKImSeAaLYKh61HzFLgnmk5lIKo1sL1SZ2e6Oh3Foy+eRApxrTZ4RJURCMfQcKYxmK6gMHOTp5LiVxMX5W5m4kUwqKoWZcLLSItyBAweODDKuEdlKI+ILTLiWZoKBnK/llEmeeCbIyY3rKhOgZag7eg+aiBY9hoEn8Hi8C0t5aPbv29/x86FhDzQeCfWwYpeWA1795lnlqyo4Evsjk6b0qna7Tl0b4/jJW+lPy2dE6bDKv++3gcSzE7rbfIJ1TCD1MN746QBi5rfDm1suodPItyCxtsHU8cPx++GbqFdiX2RjzcgLZqwTH29Nn47hI8bg2PwZRMGJyQvA8kAzSqnX8JfRacY4Nhz079/Iu7Edr6wrxrWtX5MYFX5fuhDc3Mvo+8Y+DAvmgMvhWZodjYzKt7xEbBydy7oRNvz0DtIbzES4ry00JnasFQwqAf5dMBeJJ1bij2g7HF1YbuDZsFFDzF93CJjQDGNaBWL08j0kUYt+45ScgUh+V18W++LhlaxF+TuoLL78pTT3tUFoMfcE/Gz4sGSnvEnVJPfEXz9+hGMrv0OM9WAsmGm5Gbhiy/GYX3vklOl4Z+J4iGUqvM4c0aSDe0hXfPhpb0wl1+CViIWkBkN+I4HF8pov4rNW2Cg7SsmPLBaxKQ4Y9gqir10k1/Gjql63bO1JQD416dfxypcLcXrFh/hyxQm2WVWvZTzq6UuuDKfsw3IgM2vhXjp4WSiQkcwqMWzsmzgTdYD8Pj+QWqSAHVonEErhpBBg2OhpOBV9CALzt+yLvbKJe2l++MwuJfVLEzhCK7zx1rsYPnQsKxYY8fLurJ9hLVRjxPBRWPRZBzgGtcHMTwdjGlGwA9ptYwsd0wvk+fy/bMPAYCTCoL6rHa5kK8viLE9FeReDVk3uM40SPKm0ZI1FJDDPOSMgTIzFIflVeRVsIKoaUcG1DG4kt6UChXn5UNje1+/NPZi5BnAFchj5NrhTyMelRCEu3CxCcm4aiotSYFZp4SKXItDeC3KpHs5+jeAd1AIBweEkLAVHysf5s5dx8OBBHD15gogOCbp3b4XRIwagTbsOkFg5kjwLHvLqUZ4kClvbPHtXr7ictES/6rZt2rn/imeRp/vh5OyaiLzqt3Mhz1bjYZOf2nCbUsdN9xUMDHPXbMXGgEaARzu4ZM5Al7fTYMq2g4JfRAqh12Af0gFB9xgjC7H4q7EI8gtAi36vYOl336Nl45aICg8FrNvj79Wb0KlHc7wv9cfW3Rux/qcx2HY+HrPmrS/ZX4ozq37C4j0X8OrXCzC9mzs6dW6JIrEf9uzbBN1QF3QKDURA6yHY1MzSzdBj+AS0HjAEdYO80KR+tuUEbRyx8OuROHw1Fz8v34xvX22HgX8cQgBZF9zjdXTaNwZ+voHoPfkDdCY1/+/Hh8DP3x/D3vsV1dpXywPhqzmHn3YkVLl62CuvYFCPSIR4SdAkt7Lgs7F3wDfDuuNaAQfLd3S4Z98tu1egW2QwRB6NsH/r92xcu4k/YPGgHvCv9xn+WPUAewa+EO0ah8Cpy8tQlDUj8DGinQ+aRXYkr1qdpWy/T7EkdgmEtykW3Sd9Bk0xOe64jzGzUytEN6wHSRMuFn09EfXrhsKBx9gAKTC4uS3CG3WCwVQhQa4cfSOd0aJtfxiNWnKt34FofjfUX2PAtgP7MKCVG1q07ksKEB38Or0Nj4W9UK/1HPy1bqdlf44Vvn+9G0LrNoADn7z4rT3xw5SuqBsQgI4j3sKfX75Gss9D4a0jmPLDMrQZPA1Nh7wC7039EFD3G3yzaCM+mDwUnZqFw0v64jgCY1zR1nYeniq
kYO/iq8DVHGbcuEUkMEaNhgqiTijkw95GiiSlDlZCxtMi+f1JDaGijcLd9ovlLQ93HY4ku0ZvhiIpDk0VEUSA1kyLccn9F5XUFoqUTGjjE6FQFiFSq0WxTgZVgQoZfg44l82DvZsjXPzboG7zevDyDkaLiAZQqZXIysmCRMiFT4smuJIai0YhjWES82Fbry3EUheYSG3HRHQhU5HII8+jnlyLQhMXt3U65OmNyDBqYE/yPdrJmdzl5ioFEeXxmTBrZcfvJ7SMf9A2ApFYbW2jyH9GWXoQbGXi6OFLiLudBqlUhKaRdcuMHVl4wjxvv8ALTysDrGtoQpUrN51PKPlmj/g7lu8Hz12utM2l64MqhQd8tLDse4fRMxA7urxGfzzqWqVtj50vT+u3Vdvx213H/+TPVfikQvjQmQqGnxFdcOX6rbLg5Q3d2c9bt3uUxX083JLnRRsPlsWtmpaJ6RXSnDl3KVnKw0Pf/p5dSjm1pDf7ue+2Ja3E6M2WzzuW3+Twxevs55s92lSKP7KkL/sZ17vcTfngO2dKtrGktenI6conbB2G+GMWWxVHUls5f+lG2apSg8fFG8rPpetlSzrfHrJ8Nl9muT6jbl+slGz0jiXs51cLt+KrykfEmJ8t6U27Fct+Hrh2k/3cdrRyGtdu3S77/tWQ99G+YwvsP3wBAXIJvl68E19X2PbE9maWfC3egW8rxM9bsQrhROAFOEgwa8E2zKqw7o81e3E3PV+bxS6leLz2KfqRpSL9X/2cXUr5dfXO8pWRnyJuQuXtn3dyc3OfrfXpM8ZMSkgXmRVMvGJwDEJWJnDvccxrRoM6wci7cJMUqpaWK3aLEsFQseuhbI+S1q2745lWyIt5KmSlq2EKMYHLq5nhLDPb5Cez5sHM0ZPCnYgWUpgzaRmJtNGQsjszPR17z17ADz/+gqvNImEdSqog9YKhYrpRhHKIxAIIBnUD32QFz9H9kSaQsK2qr/CLob92DowLBmbhG80QGLQwGwzQcUzQG/jQkSwKdSZwtMVYGc3Bno7dH9JdJaWmuHl4J7Qf8c5HB1f8cPersYw3/tj3XHgTzOZ5fLPu599neno5oWXrUDg5KbBsyV6IRAKMn9QDPB4XefZNmz3Nrlh28qmnmP5zx/JTMbWdhReej9YcfaT9hNauuB79aPv+vxAXF+dHiKvtfDwt+CU1ZVeeGOklrV+MHwJ2lESJIGAWa1I6C/h8qI16sP2PJVi2M93TwvCgGrieJ4RGagsDOQ6/hoKhtLuDAyEYh40ckwZGPg8JKjW2J8dh5ZV4GMVcCN6bChcdyf+deDR2dkCSJh+JRhNMWgMp/MmCXBQTEaHXZ7OCg2swsf4azIxdAzMNF8kTM+kWSs67dChpgdEAZqDmTTOPbYWg44OeDGqNWvLl0vd+iAhueqpjWPed9nYO2T1HT5t9P8Ew9I0fRrt7+jywBeJZYR/Q7CMe7+/3DAYj/9jhS6w7aAatVo+F87Zj5IfvfWRr53KrmmQei9Lpre+t3j2ALyd3Q6Npy9Ej7F6PU3dzYvHHWG/qjDkT2jxqHnH97FG41m8NGyFQHHcQXd7fheNrSuuuBnh5N0XskR/Q8r1jOLv6kwemRaE8z/j7+9/+L9swlNKpjiuWRaez35nC2UwKTmaIoqnExklPauMmVSY4Ajk4/FKboZoVm3eLB8bOJ0rmiMikZHj637+7mlT2SSGvIUJFiDxS67+YnoFDKTnYnZKEfIMeeTo1KfCBIo4RAp0JRrkUEhHQuhA4blBBdewIRO7+yNEpoTZxWJsj1rpHpwePiBwDET8msw4mDRELjBBizsmoIwclgsFUYgcExgW2EGZSW1RqDNCSTRKKC+EvpCa8j8uUP4es23xn3UA2EPf7qyhpjHSTOGo8Pc3wThOR39cMN7960c5eAdeGTp8zQSSWqGsxy5Vg7JtCeg+dcHH9v0vvXufh5WyQOoXMqmq/Jwm/U6dO++638uevPoOXjyMK7ZphbI8G+PSLebAVFKFx6x7wtpPgm88+Q8MQD6RJ68NdFY04Xh28PKAVClKu4PdF2zF22ltwD2iIVjxn/PvTF7Dz9UWddqNxe9d8RGfw8fbr48ta2rT5SZjz+zL0nfAaQlytseSPOSiSBmDq4HC8/NI4BHd/G/O+Hot5Kw7izSmWIYHn963F6RzLcCSm9jqgS0P2+/I/f0K+lT9eG9P76V49CuUJ8zy2LuQrta4KmSjtSaYZrmCcIKWVj2q4az1fIEBYWAjOX7vDGjkyza2PCrPnxTvZuOFY9EDBMDvqHH6KvsIKBx2HKehJ7Z7PFPBk4XPBz1ay01e7avOQevsOzG07oF5SNjp4y1EkdsH14lR4uvvidtwN6DRKCPVatuXAoNKQWqCKCAMdOAYiIohg4Gi05LsBHGb+CY5FKjDdJTwBn6w3sesEjvYwGU1Iz8yGv+3zLxjMFj9Iz5WFMePOee2fX373TuKX043cqrOWqs4SpzoCpx0tYXdrs1dbZ5c0vytnm7Rq3ObIM8xutbTo1O3fxAunxubE3+rIhBn/Cx27NIJ/gPszqWTwGzdufI5Q5cr1Sxdj7v6z+HRgK0Q0uYRFC+ZiweKFOLboI5jDR2HZosXouHM33ureFb9u3ol/BnXHoAE30KjtJOze+TdaRTTDqvfa4whfhux1SxA8fi5kGz/Fz2cleCdSjZYTf8TJBW+xxwqP6IQNe7ZgaIsG+H3mQJzStkIn7iXsj2uBOo48DH11PNb8OgsRfV7FnLEtUP/AHvR//U/sWvGpxetUUQJWbD8Jx7iluGw3AvUSV+PN38T4aWrnZ3EdKZQnwu3bt/1rOw9303n6ulSRkG88+utQATta8QnADKW0EnBIAWyGvooGFROpgXtZyxBFClGm2Z7x1MjMGcFnRz0wBQEqjZKotG/JcMxSOAYTUnKykeVo8ZZblYHk9sRYzI26AK7WCJWNFSSJGTCnFaJFkBPORcVi4ZB+8Gwix5JbqXDJisfwGTMRvH8n/DJzkavUwN/bFjeTEiGwU4AbpYbQpITj7UT2ONl29lDpVeAyY8/JsesG+UGQV4Sc+HgU6cx4NSwJLYL1CLIHYs4WQyoyQU3OLzeVgw/NzREVl4CWwYFP4rI/VZpOXm5ytLXK2/Ftv1q3w2Gmo17926dzzu9cOo0Jt7K2w+GAnBrtm1KYZruicFm3FbeWdXPc6VBw6eNMxdPK56PMT9HjlbcHMK2QnLQzJ42agrrM85Cj4XBM2dmObDyHY3Z0cMh6Gvnlnz9/vtGDNvDzdERYgDNSUwtIyAmd27cmgqF8fdN6wWAc8EVG1IWdBFBrb8HEE+Pcmcv46qt3gMLy+S/6DeqL01+vQvden6F9Fy5SfiztPiiE2ixA1Nlz+OCrL9Bi+GBcnPUJ3v9tPZb1ehuMq29mfvuUWxdwOHcD9Bwuim5fh0NEO9SLaFkpvzHRV9H7r35omFKE/ouukBgqGCgvDszzSHjik8Y8Ls0aB/OGfbPP1CHMZc7LvUPfedz0BKTob+1ui30JuWzzAlP4M8NfjebyccFMwc4zGGEQ8a
Ana4VmIztbJQOXa65k98CIBCYNpoA2mcrTYLZh4nl8KaSensguKoCD/N7hlbtS78B30y54NGkJbz1gx+fC85VR4J05jy+/G4m9W7eiZcM+qHMxGueT0yEhBb0kNgPqiGBYxV5DHgmrOOQYJK8SvQZuKfFYNmAg0lLT8GpCKhIN+TAJZejdvSeurv0XRRkJ8FMXYsU7GeAKtTDp3HDnti3MxkwcOW1CZoYGHh58zPHdgU0ZXnjtcS/4M6JuXR/bbu9vMc9+qXn7iADHQ8/6+My01Yu+eHntlRM7B1SMdy0UgUeE6aBmExYMDOpxaciygT/XJL2s4mybdUeXjxrUeuSyJ5VHTfzB7WJtbjezU+gn3KLkfvDu2KQm++kNBoH1O3tVBpO5gkPismH/RCGfySwNMMLe0Yqf+0Zj+fdj2tRZ4uJgm/4k8s4/d+5cY/JZdRMDYUjvgYi7lIKPQh8sGsskkigU9WRpWLl+M+vcZOkEt0rbjZo+HcFthmODwohXPlxcEmuNLkFCLF61CUmxqYiwzsD6k4nwc7ZCfrEeTi5O+HD8FETK8nAn6xxu5Zhg9O0G1cnpmDgmqlL6096fjh5928Jal45P19SaJ08K5ZFgWvyeVxsGDzcH3MwxvN1y6uo3j/46RMgllZrHSa+xpzX2JhYSvWCqNMqh9DuDk70CCTk5kPIFJetxz3b3jIy4y9sjg8Zgws0sLVwLsuAQEoa7rSZtOELU79gYgXw5jor4GO7pCkNCAvZdv4m8HBWUXmJYyaywwzsUjfPyMPDDT6GzleNgGhe6pGwcPTgX4i5dQHZlvUR28wuAhGx/I/oiPu7ZC5OPHkLHjl1xceE8FB87iPZ2mfhtdiNodQXk/B2RHGsDvTEHd/KaoPugsbhx/TIOHzkCvToDPZ3+gdE0ETzufZ3cP1eE1PHCgoN3Dt6adyx/13f9n2lfynu9/NVGg15YMY4ZFjnotTljfuzWl3Un1Pf7yOsPk+a07aP/7dd86KrqnCExrQXk+Jr67Qct1OrybSd99M+Qu7fJyUwLtNfmsEP5OJmXWUPL0laB6vIR9OWR25XFwv1hGtEylQb7Dw/lzV4cdWJ2zChDYQHX7rBe7v6jXOF2SkSuSU3SuRt+dbWZNVvXo1QqJN45zn5+vtQyYVePkuGBJ+Msn7tLhvntOl/592AH2U0qGZZn0wQJN+4dJvrP3spxRyuYH3T99xiqGhx3PT62Urh0aOW18xPvez4UyvPMg2yKnhciGwfzRsw+YGxTz/HHV/uGT69+j6qx4zOzPBrA5fPKugkYPwlMF0QpDer5I+O0iq0dWnyLlWxX0rpQOsSyUhfEXWEGHp+LLVez4e+nRdMqfDFYCyUovJNPvnBwwssRzZOz8PeJA3BzckDajRgci0/Gr8s2QqPiYVsTf3D1GpicfOAukEIRGoKuwfVwylPB2h4wjuWWpKZibH0N6rZui6Vkf8eIRuBGRUNw+yqWjSXiw7stcvPPg6M1QscVol6gDRasjcdrM3/EH3/+hcPHTiAvtwADOvTBjbPfIjvpHJy9mTfpc6kl70Eo4COknq+ixwdbzV+Nb9axYZDTgWdx3IpigSmE+779x8C2XXpvrLjNdeU134dNd/KiIWsXTtrQ/0Hb/Dlj2F5yP/Mv7ls1mQmfP763W6OWnXdV3EYssdLcvZ8x8chuvnfbLg9K+/MN1z9LztN4Pmy+GUYGEZ1j0lvbmDJ6I49ZLrD+GhDY56G7jviMVTahypUzv/0BVo+SQwqF8kiUjFp67nF3tcftPNNbraaunnbklyFiLpfzUP2wDMwOPtZiJKosvt2ZOhbfXHlaKb5WRwSEnm0QuFsE1MSZUamdg4CICz3HAIGtPcwmxlV05YK3nsIZO91ccD05AwaVCp/ZyMA1iZAmdYBIpkLAoP7IvJMEY3oqDEVqmJnhEVZi2MrESP91IcxyK/A/fh35uTlQkzqgyc0VUw/uhR1HiF1CIxRh4YhZvwyOV87AY1JdWNmakK/SwWwQIafAhHBnFeSKuqyjqfCQetiw+F+oDFrsPBeD9zv3xp4lszH64/VlnmhfFOoGe2Lx4cT9H8w/nr/7+6ff2iBTOGQq87PZyWjm7k6q8gYp1BYJq4qvCJ/LQwOXZifOpp5o4ZcjhU3U2b6miSYuMwfF3dveuBrV8N/PxuwuLsitNBFFQdzNUNwlGKzk1kmZti19nfJPXic3oog9ljqrs+balpv8gO4hfIFAf3f6qVm5bl8dinsohzJBTlY3bmYWBzPfWcFwN0ad7aXzJ/qHN2qx8d6V94fPjPu+38pu/QfdbxWFQnkKMH5RCGtrOx81pVnjYP6o7w4amgfZ/TJtQMQb1e9RDofDR5+6TvjrbCZ00Fa5jdlogETAhUZnYt2fl9oq3LPdfTw9lh/L0upw2doeYdmp8HEqmYSOiBGzuRhOiW8gWdMYni7u8HN2xOWMeJhs5XCQSuHStQdMuRnwaBgGs6AJbkSfg1bIAU8qRrHBAIVYAL7BBCsDF4Up6TCRY2nMIhwm4oQxuOQU5kKhKoRYxMdH4x1Qt04E0nJTIFTzkF5YgMxsPgxaV3h6hoJxlupiZ4cGwT5IUuVhWs8eEMpTkZKwC0xJ8iI6k2Ys+UNDfBU9Z24zfz62SefGwc5PrRUtqE6LPRdObRnFfE+4fbOOj39QpebujKw096r3rMxLTSdsf6fL3KEfDqmXx9EZBcwUaX+8M/jg1Lnr25Zuc/vG1bAln4zaW5SXVWm6XZHESjli5l/9w5u1r/I8nZxdE0yO/aTFt/dvlxsLujFxYo4uEHFbVJk2kSFOLu43K27f+++zD+086PfenlPa1fc7pNXpRUIBXxd781pja0P6FCdhcR8YtaywCbUp/JJ8PJxgKPEs98AJOCgUyrOB8bxaUxuGJpOXs/2eZ+eN5Dwo/LjYKWRMFeW+faeuLnZIKMTrraaufvXwL0MkPC6nRhPfMB4evWRSmBgj8fu0T5iJNmjmG4DDNyy+cx4kGBjuJxpK15++lITWdgZ4O7qzrRYGFEF/yAuXYzrDqjAPKXvWQvTKNPLW54JvbwsnVydkLvoLDUlufRuEoP7gUVgkl+FSxh3omXlKSH3TViaDslgDhUCMouwcxpEzmBlMWN8KbB1Xy1isgcMXwcZWBL22EAfOqSEz5LPOnQqLitnJuJJSb8BIEpRYyWAns8b1jGTk5hfBxooPrYmZSrvmvigq0vRlyzQIZ/4e8cBwDbnvPeVgK2NUn+h+6+sEeeDfo8l7Z84/XrDnhwGKhzloTWk2+KVfSwXDjcPrxvr4z/yg4vqFh/6qkf1oa6f+C6ykVsUvz1rbZt67A1hjuNtXTrfJSk91VxYVKP54q9c5g04rrrgPaysxY16/pq067KkufaalQh7YuXtBfo6vTfpBy1Bqs5nvlH/yRk6R2/f2gS3eY6KSE+Mizg8qsllynY+XDgghFnI1RDyLH5g4WANJthWFqAVWiQcE1zsLMAvh+jr2N+QalCGMkWhVrSb3g/+f911PobxAlMztUiPatAh5qPBjUCNDK6a1YcwPh
/RN/RS/vjGowes12Ycp3iVELRTdFV86+oHBxsYKVlYiGFh/DEK2kL67N6Ji90TpyIiK4VJbh+w8HfJKBkkYGR8JVyeAJ5QjMeYSrOq1hUokgBUp3Hmk0HckYkF37QbyjhxBXHBdnFt6FRkGCewbBMPZ2h5JxXlQiqygkYrB0SrhwBfiUkEupDIXqLR6cIzMu5p174jc9Ey4C3mIic5B6ya5uKq0Q6jOBL1RD76BbCuyR3HSLtaGY/3WzTiRkA6lmofvV63CrHfawEjECLdmP8M9PMP75L5ioRQ+n4ewUD+bXh9tN308slH3ZnVddj+pgzN4+AVfLf1+eNuyKV0nVBYMu++sqpGBW4cW3diad936TU9Z2dhnFhfksN0c34xtlnS3oJfZ2GcNnvlP7/AGTU5XldaDsFHYxxtl/QRI2HeRZ1CGMnH2xtR3TbHbhus9OwV7qC+yhfzYOgb09udmHcvzmDRwXfqm6tLVGYz373aROe+GMqMr8zUrLbGRs7vP2Zrml18yO16VK/VF6Vi15SiGjBiMkqnpsXH1SgS16oEQ97uHJhUgtUgON3nlJ/lqcgFCPGyQe/MMbIOaPrTJTtyFI/BreH8vkYdPXUfbyDoPmeqzISc+ConwRwNfi3OpIwdPo037Zg+VRtK1c/Cs2xj64mys2nQIo0cOwrk9m6F2boDW9b0eKV+a3BQIFG7gVejHjbt4mNSg2j5Rk6qTe3fCyUEB/wbN71lnJrWsVIMI7lbVvmMeSPz5w/BpdP98Hzx6Ee1bN7jP2vthxr4jV9GpDfv8YvfxGIgMxWjXtkajnx6LktljX1hcnGyRqMS01tPWTDnw4yCZgM/VVbdPZx97bLidXT5EssLQShNTSJr0pFZezEzHirJZYquYXKGqOSbuRsATIM/JERqtBlySnCn7GPgCF5gNieBb20HIEUKfRepQ7mLI5Aqkzf8DAr4AWo2GnRb22oVoLB0+HK/FXoeKvAKLlHlQiyWQCYVwUKlJRDF49kLw1BaX16xhhpaDgtwM2OUWkBpuPiYPzoaDwA+Z2SJI+AayFCIuXYQuzRxw5cJW9OjXB3Hxd1CQX4iw4HpYvWwDvOq0B59VSi+WDcP9CA5w56w4nrLro3+OF+ydM1DxpNKVWlkVl874qlYW3pPujexbTtWlIeaLKrV3vT1vf8gXQyNYvwYVxYJIIi8c8dnSLo8iFCrCY0ZfBHQLS7t56j1XUzLrwphr0HiI4rcVV9jMbBXU1Ut09nLb+yRTCbXOcN9WiBxR4Kv2ygzWcNHZEP8b4FPjQumBknXxjlhMGjkYfy7bg1dGdcHJNfPRb+gkxBzeAbh1xorla9BryHCc3rcbLqH1YGcnxqFDJ2FS+CDt0Ap4dh6LzNjLuB3njhaOBjA+X9atWI7InoMRe+E0ktNyMGpEf6xfsQxNug+El60E5w7uQabYC97mJGRbBUGWdodcKgNWMccaNgInD+9HnloCkeoOIrsPQezli0hOuYORnYKw9dAFdG3bDGt2nsHI4QMQFZcH6/zL8CeCozj5Ki5GX4dL0+5wMadhy7F4jBjQAauXr0DbvsNQfOs4LufL0a91XSxbuRF9ho+ANbk6uXeu4FBMCiLbtUNeIflx404iqEEElq/dgaGjR2D71u2wkeQhJt8DI5s4YfvZJIwaxIg3LX5ftAljP/gMm1atQLNeQ8kLxIRj2zbC6NkM+dcPok77/gh2kpK3lgYrVm5Ab3J+edeP42wKHwO7kN/QoCW1FBX7W+zduJGIhTHYdj0fjVv1QdrBZUD90ey6LWtWIKBNHxRFHcKtbBVGjhqCs7s3QufWFM2DrLFs9VaMGD0cjNedlKvHcepGFvr07YWtq9cgossg+NgK2Ul+shOicPrCDTTqOgjmO6dw4o4JXRu5Qmzvj0OHo9A0UIgD0alo2bEZsotFEMSdgFzGx6k4HXkBkovj0hQt6tli9Zot6DtkCAykhpUCD/iRz1WrNqDn4CGwJrUsZcYt7Dwag9COnZB/8RCyJHXQrpE3TAYNVq5cj75DhyHx7H5kiILgy70DbyI4LiSo0cjXBru270F2kQZD+7bD/r374eksR1HCWRy6EIewiHCcuZTKrtuydi2a9hgAg5mLPdt2QRpICvuEI+D7tkBkkDO55gasW70WHfsNRnrUAcTr3aBNOgSdZ1fU4d6GnuPDTufN5DvN7IowdysYNTlYufEQhgwbSAqWmj5iD8e8efMmP0wrw/NK00ZBgvE/HtE29Lb+Y/qQhvdtBmZ8HIa5K1jBwIbvqrww004xAsLezg45hcWwtIg/+sVnnETF5iiRzNHBz9sZGrWSbQnI1glg5vJYp1DFRUXgqXjkedGBczsNQjG518miURUhY+IgfPf3T9C1aAt1QQ5EfBOMUhG4eVwYiCDQkXeVtbUcOflZpScAHoekpdbAo1UX3ExMw2cLjsJOno12oSrIxBoiJri4Gn0APQN1+HjpdHiFTGdm6cLt2zfBN+fD1SYBOTG2rH3Df2kOKqa1ITzM36bPJzuM7w9p0LtFqOsDpuCtOTKFQ0ZRXhY7hWNhYYGNtbUN40SIiLsieU32D1KEXK0YtrV1yPZv3GnL7XP7+jBhqVyR+/Kczc28fPxjq07h0XANivyusCBvkXX6wWTy+1dqIUjg1ensIxJpGHuEmqSl0eok91tn7+gSh1wOM6EJH+qcpg+TR36JYqqyicHa2TJXBF+Vz35mF1lmjavXpjv2r1iIEaMmYv6KPeAS5d/ZS4LruQYoAiMRt2sVOrcKgUOIO46YBRBe2wYVV87WFBi/8Gt3nYaNUY0JXb2RZShCkcABUp4lC2l3UtF7XCv88OsJ8IrPoE2YF05tWgytUYR1u89DInVGU8VN+PYajuU7LoCv8MCIbu64nZmPyD4DsWLZJowb3R9bTiexxjam9CQ2XX1RNpr2HIiVy/YhM/48nNzkSMgLg17uCpFZj8NnY9CwS18YSSHN45qx9ch1jOxQB5uOJGHCoDDcNBiRRcpufkoi/EPqghmCvudiNvSKEHRoJoKj1hXqtAvQ59xAzHU5YhK5CPX3hvDWPuRrjVi2fAMcSY1ewiOFXQ8nbL7Fx+b1W/DeK8NwZttajBg5Govmr4PCnI6QziWjd8h15Ze8IXR6xvEqH5pCPSTKOziXygNbbyYvKKbWsWX1egRI9Bg1shtu5ybCsX0/4MoO5KU4w80vmN1fQEq57bFyDG9qx7jSA/P6Wb9qN95+pTfy0u+Q2pcGXfsNxaItUejgZUB23Hkog1pDQDaPv5OBWKLdXibX4g65RtkqPsREzHGJyOvVawiO7NyKi3vWQBhng2EjJyAzX80W1hf2rYVVkh2GjhyHxQu2YsLE3li+7SomDWuB64W52H0lGUL1FbRq9Da0eYmQu/jCqMrFzktJEGkuw7qOE7zRDOkFluckJV2N8SPa4GJqNoJbdkXW0RXQCZ3RZcBgrNx+Bf3qkocrejv5rfRYsmwvbGVCCFRKDG9TjGyXXtBErYchaBgSDi5HkY6DBRvPwl6ZjDY9wuAS3Bwa20DEnriNW2c2
IMTsh0EjhmHeqiPs+S+7eRNjJw3BmiM3MaRt0MM8ZzXm5Zdf/vu/IBgYnB0VSFHh1TbT1kzaO3fgDSG/ilYB8mfNDqUkhTM7ydS9YoDobDT08cD+K5Z5dRhRUTphVXWtCsw27HawTHXNGEucSixAS50RAb7uKCpkJEsSlBpb6Mh9LTBrYEhJBdfHGnI3FzRoWA/TP/ocxQWF+HrW1xhAxMBlIjj6uTpjWXEhEmQkxw4KmFOTkZ94GyJXB9jYkUpsYhx4RqY1gKQvE0OdlIyLfCG5Ji7487AtpjTNQZYoggiLIyhWmyERxeGng8Hw8zAiO/YvFGdrUD9ADEdFOvKSVbCyOUKEcQysvUPvGeHxohPo58Zdczpt+ycLTxTumzvwXq9aD0nTHmN+2798DuvjIO5aVJOIZm1Z48OY5CuhNdl/dOOX598dN/XrxX2jDuzszXdwSQ0Nb/DUHKtZ29hmGWV9pUUxO08oBBpLYW7tudfHLXQ/87XGguEBLQwMKUrRDncrDSuArl+52LlOaIMazSf1wBYGfuwJJPkJ4N7Iku+27esiOi4Vp3ceQK9mjkjKTIfCzQOFWRll+zA3M9OSwCeFdZ5Gx0ri0udZk3wWwc2a487JBHAY3/CM43F1LsIbRWD9ztN4eWj7ko3FsHH1Q317X0CVDG8/L9g7NICSVDBupWVUeEGYkZVwC1fOZcGvji/UJNqZn4fMxCvw8grEhVPnUZiahlJXluxe5GXjRWo0bTo0h5UhDw0b1ceGPechtvZD9uX94KZZo0WLRjh229IqZa1LRlKsFPBxQeqtq1An5EB6Yj9aNa2Hy0VGS35JDSI7swAX9p5HWIA9XOu0QL06wIYl12Dr4oagFvXgm5WL+KvX2e31ubFwaxiJG7ssfi3cvN3ZaynwqANllhBqUnOHb+VJRH09rZERH4P6dfyxmwgVd4nFqlyfnwjnBpG4lrLfcl2Yk7RywKljVyFOSkPDMCHCgoJxOF6FHsEycDOvISNZBnuTGYFNmuP27srTWFuuEbD9XDZC7cUQimS4k5KMYiKYfE3JpNYjBi+wAZJvxUN7pwBt7Mi10RciXxYAETceLuQlnJKahNgCI3Kt/CHmxRHB4o2k1FQIvS0Dcjx5ucjJTAXH2gs2Dv5o6Gpmb8QCIiDCQ7yx+1ohifdFYzdSmBizkJiQAD0s4lVdmIr4W5fg5BBUbidXcjtwuexAfUhdAhHa3ArFhUbE3iCVABIvFNsh6lwsVLeKEdAWcAwIRhNyXxWQy5gca0LyyW1wbtcUufmFRMgoYCsRwM7XD6nZ+SjWmKAgSQfZa5GWmgBvcv+cu52Fxv6OD3p8HokZM2Z8W/1WLxZDuoRt+nLJyc5fTmxZ9QakQA8hwvMCKYh5pFBleh3MJsbPAofUqk2soaNWq4WZbGckBTozz4SZy2fj+Xx+SRLlYoSxXygNs/YM5pLuipKCVqkXQGBlBb2JD62Kx9baHWzFuEVuJAFJ05iZAmQ5wbaOFIdPnsboIUPw6dezsWLRQsz48gsUSmwguXkD9ra2SGS8NCrsYOCbUZwUB7WvC6xtLRVZY1mWSF7t7JF7KxYhjVpCZVTir5PH8eeq42jVNgQf9kshFaJiyPjnSIERAQ15r4R6SqFV63Erjrxj+fkQkPvxjzntoXEdDV1aPFpGtkOnkS+DZyB5FlRrA/fcw8wV0q1N3WvjZ+8+uej9rvf2YT4EDTr0X1EqGA6un/9mqWD498Afb9dk/5a+HXZVFR/RofvWmuxvMBgkeXl5gTY2NnFCoVBZ03yXwriKVoT1ahYbfXSam6u9tdSpXtmkUgI+/55hl1Wh1Rse2Ndrcgx9F6pzrGBwx51fyVWrUb/+A20YBr5ksQ/xLOkqt/Zrjfrks/5ro8q28XRiWn7qsd/rlLiBqD9xfNn6dvWI2q43oSzMSLxQb5+yMNOh5ESer4Z+bAsSeo8dy35OGtSuyjw1CHIsSQUY2YNIgR7lnq0ZnxHdh48vSzeiP/OCsrykFHUtXT9jR1ce5s7kvN7AVvccx7vEtcegiZNIKZWMmwY+RnQhZ88sJfhU2L49MwPAa5MqpTFgrCUvLZjMBLqhbYtygdvEgSyverPfPep3YD9HdXcpO7dSXOpZ7DfqdxrGfjLjdwLbkmo06rJhga0fmAnQm71Rfo2ZrPh3CK2UVo+SnruXJg0ti2MGRYe8YslDkx5jy+In948g/yPKwkxRHTzJE5ePHGQ8o8BBIMOIjiFAx3JjqQGtScWgteU+OHdgCxFkfcBlstimfBtvN4vXzx7jLHllfsm6FX5ne/9I9lg+bqVnUY53yafE2g3+4eXdeD79Xir7PrZ7WNl3J8vthA5Nyn209GpBMtTCct1svCNRVpWpwxzQcp8wHvsD+0SS/5FsmJl94L1x5XlkcDHqkanCU2H27NnvP52Ua4fE+OTrHw7sNNVWVod5aSvut12XUE9EHWeMxc3MCLYqt+EVq5GfnQalRg2hkysc7exYI0FuBQPHe7w+VpEOnyOEVmZPavek9s7vBgH3LLz4BTidmQiJlRPMSjVMag3UWg2bHT5PgG3HokkFYBHad+6KUwP64s9xk+A0Yzq4xVoIrK0gNJGKgNYAkUAEoVDAtmZwmPcqETxiso4vFMOcn4Pzp05CYesFXoQOAvJiPUyOcXKJN+yM2dDrNSg2ZWBGuyL426uZmxSp6SrYKQRsd6bCRgQz/yi4njqcvxqN1O9WYOCEtbB39LrPmb5AqAp3dGtb74t3+tV7LHsAhorTUSdGHeq5benPM3uNeePrM4VHWtRkf1/vh+9qYFrqs7OzNwsEgl4KhYLj6GipTOTk5DDCdreDg0P3mnhzrEhA/da/3h0nENRMMDzQ6BFMme5zE9ctDp7lfEONm0tfDF+jtY3EA0+nAfrFIqxN+xpt17hDn6eWh4kT+z61tGsKhxQgzjXqDX143n///dn/BdHgbiPYm5ySpVn7eS/2ZrhfpaQUD8bro95Eri0zL0S5AKg4L0REPX84+objzOY94NjbIEWnh1goKhMJpV0UlUZMVHVYrhIH04Tg5+SjTfMIXDidhSTVBQhjr0Mvk4FTnAuB1gR1MWNoyYcOXOjd7PDl639gycHD4H7yMfrO+QFH0uNZm0amBYsZcWHMzYeeJ4KVwAQhM2M1hw97tRIhGfkYXCcY//BjEFWUDeXNZAhIjVpo5QKeVT5UqjTkm6XgkHe8LRHkjXusQ8GlobgUfxNtmjbEtdgYIlo4aNauG7buOIVeQ+bi3NVsxF6LxuJv+uON7w+R9daP8WvVHnYywfVzF2MFG2f17fkk05XKbPJUygLWUZSAb5ncaWr7D1ycZC5mG7HCrJDame2sHHg2UluI+fft7q8ROp1OLBQKc4hIkF69ehUzZ84su99ffvllREREMEZteqVSaSeTyQof51g8LrdGDtKqEwwsUofjUGUzNSVOduzprx0Cms2sbpcH2jBQKJRny7fffjvjRRcMmclpUe/1aTvR2S4kqab7mLmkBi0xIFsnJS8lprtNCK1ZV6nFYJdWjwb1vMAJnow
zI6ah4+jXEJeRChl5hQmJiLt7KKUl4Xs9QvI5YlzI0qCBrRH5d5Yj9qYaBhjhqEpEqrU3zKlG2FtZoTA/F9L6TbHup+9g5Ivg5OMOh1u2yO7dBeLjZ6D394LRpIRGz4ETV4CctDS2q4SrLobIpCOFhg4L/P3h0aw5xD7OsJHwoXRyxqu//AiDSg0jefdzyDnxbBTg5BaAw+Whrocvzp45io1rPdEoRAWV5ipa+HnBykoIVVEC+vT6Ht71W2Le7K6QkDxKWkzCtj8/Qu/XfmSNK18kBLrirV1C/X/6cFDIE3cb7Rra5NDtU/tYY7A2/cb/xnyOaTaFKayZhsUn2hxDxAIz9buU+X706FH8888/ZevkcjkjGJivPCIWmO0ey3kyn8+rkY8TbTU2DCw6ZZnTRgdD0gdADQRDTQ5uIQ8b9lwHNzMG3UdOLBtmWZF952+jU6PKTclHtyyHwMETRoe6aBn04D7fgyuXov3wMffEFydH41i8FqkZxRg/6MG13N37TqFrp8jKkaZinE3QoomfHU5uWoHm/UY8MI3HR4P4AhF8bSpfpAVbL2Ji7+qG+Cmx/7YADgI96nvJYCzOhM7KCQ/SwNl3ouDgHVGjnKlykiG280BN7KYWz9uAcZMHVL9hBf7ZeQ0vda9b5bqrB1egTvsRZa5n5i/ahknjez1U+vdixIHoDAQLkuFez2Jr89eGM5gy4F7j38s7lyCs+9h74u9mx45j6NGjcjeVvuAODHJvSJ6ymfrDDKus76NY+zDhRyU5R90wp0hb7bTb9la883fupCv//ah7u4c9Bo/Uxr9qHU5+TRNrM3I2qwBX0jIRU2iAUmsATyhBcVEuxCod4sSkFp+ciounjsKjQWMYiyz9QxXFBad0YX1Kl8cbiTDhGTmsl8kQczJ2XAnB9Zzz4PBM4KhJzd+3GRBzHVITY2isR/joIZj+3nRcu5SA+Yt/x/yPZqPT568hxpaHjFu3SInAQZ5ZCAem4DcZYGdjg8LMbEgEYrjdugmtvRyHtkdj5cFdaNGwMTo1b4aQ8Pq4cuE82knt8cUb43A+NRnv/jwXepIHtVqF7n3644/1S7Ez1R82TRogLTEZ3epLoI9NwLD3OuPnL1/Cq8NFOBmXA77xBo7s3oeeL/PAu48bSHu58KHC1RB3vxVKjcFJqzfJqkvASshLv3ApVr9tdv+n1hQ55ZN/hlw+c6izT50GZyRSq9LhiYzPoWqNKlNSUuq6u7tfq8lxkpOTe3t4eChKw0TwV1r/+++/47vvvisNSjMzM6c5OTnd09VQU4QCQY2MHtW6+4+SKMOgca0YTE9L9XJxdUt80C4PtGFYvXobeMUZGDTBYssQVL8h6oqtyQskA1djkpBdaIYk+Si8QoPhGNAUN6/dRAgnHklE3Nj4NkGwowi3uXUwrkUjFBWrsWDud+gwcjiuX89EVpEBhtijqFvPG4HtB2L7xj0ojM9A06QLuJSuh1kRgBaBFkO33bsvYsDEcVAXFSL98kEkaq2gk/vi1MZ1qOstQId2TXE+RQuOxBnXb95CY1c1zhY4ICO1AP3DzBC5OsPZPQALV2yHLjkF4SmXyPY65Bvk6NMiGAdW/Arn0KbQ2Ych+sBO8g4oRqemATiRJUP3luG4cXIXion6dwhvTx7OYzDq9PATZcAkkiCyY0dsOXAdxXmFaOVrQj4p2mWBYchWi3Fqzw6AvOQ6tvTHzQLG+5sQu1YugsRWgSadu2DL3svQ6wwY3cdSOF3atwrFYnLOrq1xM60YG5f9i0mj2kEmFGH9rkvIuHEZw3vWR0qRCVLfujh2OAqkWoMIPxE0kkDs238IYn0RmtWxQpZWDK9mneAi5GDFv+thxVWhz8jR0BSmQyBXYNWeaOjzstC5kS0SigSIKXCEVW4MJPp8dO3ZHoeupJLanhHHNiwFR+GIupGRWLH+BAa3dEFclhZ8j3BcOXIUAkMxRo0ehKzYk7iWZYbOoMD2lf9CbmeFhu37kbwDS5ZthA1PC38XkJfsESh822DZ5tNQZsXh0I41aNC6Kw6euUIulR6jB7djr8X1wxtRwJPDL6ItHGVc/Dh/E8IUajTu0AJXYnORz3dB/MF16DZsAK7fugMZ/wasFWKciVOhSMvB3vVrIJEDDToOgRVHjwUrd8OUnYWAyNs4cz0HemsvdApxwc61ayEWGdCwbQtcuJJO4l1x48YNRHoX4kKhG4wpt8h6I0LDPcAROeDImRik5Jkwoe/D+dOoKQ8zQmJMO58hDxN+VBiPkdU598nPyDz95vjmoz2dwm49yjE44BLRwPgx5EEo4aGLly06uzvCwNWBbzaQwlSKHV/sQO5OJQoG9IRzl3aIO38Fzg2aQms0QFzSulDWPVHySmMNHSu0MDCt02KjEQZ1EQ5wyDthZRSc5Tbgcck7y5gDg40cWrOevJh5SMzNhoOnB66cYbwLK7Hg978gNJlw81AUOJENkX89CqYgV6ikCmj0Bpg0Goj0JmQosyAg5zG2cTP06tEV+7Tr8b2PL05fUCOgTiA+rB+O6Rcv4IsJo7B16V+YOPkN1PnlT3R99zVEpdxGWkYKWoW0hNhRhT1dhsNh74/wd7RHNHlemRaTuq4iFBOBolRroMo/iqxiKQRsS3XVLQwfDKj7UOEH/k4czn2FY03uE6lJs6V9HfclX4zov6HGB30EmJklG7TotLNinMlkyuByuT7V7UvEwlWNRiMXi8XF1W3bokWLtUyLQsuWLXGLCMj09MqzSBsMBkRHR8Pb2xvbtm3Dm2+++V12dvYjCwaRgK9zJ+85Cbk/xQIuRDwupEKuQSzkmWRivtFayDeQeJm3IOuBVRtmZk0dT7FPYipsUzqE02jQMFaIDxYMD1qpI4VkgHW5jcXtq9EQ+rhBkX0CXdv0x9Z/FpMH2R7NWnXGhUQDbJzdcOTccVK4tIZcYOlq0auKkZV0C8sPxcHGzg++tmLEya1QFHcdVgpvNO/SknWB2rJbNyRtz8SFUxdhE9wUMnn5zS/gW57+rOREREUlo8/o0Vi8ej+cAsLRNsIEk70/ODFHIXYIhMLGjrV67tgiDCvWHUNZb4shHZE9eqLgUAFioq6iTc/hWLZ4FfnFg8kLxZqo/mZYtvcU/Fv2QCPuZeSRgrwTEQsMquJiWPsHkYICiPCUIDqFh/BmbXDo0FEUJtxAv66RuLBlOS5fN6D/2LHQG/OJYCDnzpfAUaIl1y0WrQaMxo2tF5EkCsGkroE4nipAXTsdbuaXC/LriVoMmdAc+0vmAnP1D4ODmGiC9EQM6d0ae/NiEH3qDNwbdYLcrEVE207QHF4AD3cPuNtpYOXujmCFAC4+zrhx4AAs7U1aKGWuCPUv73RXZaagX7fmSD6xHjevxKLdkIm4sfcWNBwRPG25SE++gU7kN824norLxR54pX8z7Lqej7DWHXDr9ALI67YlBbQZrcKccfJyCptm3KUbaD1gHG7uvIZUq/qY1M4D5/PUaORggl+LTmjlbYWYI2sq3V9yRz+0bReJU+QaOlnL4O8mLVunVGphS16y2RotEQwSOAVFoFldLX
ILVBBbK5B6KwtSWx8EerjhnKMenLwbSL58Ge27jsStDWfIy9QM1wBHqHREMJpVaNupPdQXcnDn7FHYuDSCSM5Ux0yIU4rQOrgOBCJniA3XIRT5QS63g5A8iC2ah2PhwgS0aVwHHFMBea50EImtoExjKh9PRzAwrqEJT6SgfxbIRbzYuLiU/KUfdousfuuHgcfaMwiY2RM4AghJpeadyMaYffIUInv1xLmZ0xDy659IXLUKLi3bweBKKhE2QoTfjoEDl48uET5IS05FUXYuwot+xKEEBUKdM5HM8cXxi1rymzogId0IH48g1pA3ntyD2XwpJDJrSAVW0FhzoTFxkZ+Xh+DZH+HmOzOQlZEGA6m0DOnTB2sSEyCUi6Hmi0BuMWj4zBTaBiKgjcgnhYaepGUi301ESPDJG0nH8YSfZzERRjyEKpwwpm0XFBYo4aqIw+5Du9C+9xDIyXtIUzcUI7/6Ep7KHHzctAPSbEhF0bM1ESQX4WonIGkaUFhsTdJRw5CrgVpIvnPUrCtq7iO4jX5WCPmcogsXY9W75wysNSOk/Pz8WXZ2dltqsCmHiIVCrVZrJRKJ7pldsiJEIIjIMwsiAtiuB6VSiUhSuWJEgq+vL65cuQKBQMAOA/7www+Z9Y88pIVx4+xkZzM74YsOd6+6pxy/ffu2/YPSYkZiSAI7dc7OTAl3MKV+ksPzmu7u6PxAscAe6IF+GLRpyCQ3vkW7clEnLAJBjkSM+Drh38WLEdyuG7LOMoUy04dI6giJp9F3YHcs3XgIAwIsFvODmtth3d5jaNm2J+IvnCNHlOPW2c2QErFgxTqB45GXtRsubFqMIpUJY8f2xaIV29CpbyC2H4tBz1b10GNoPyz8ZyECIzuja+8OWLBwMXoMGoETx6LBIQWXWafEtfh0tAsTwk2YAZ2gLtsc6StMxI5zWgzo6wGewA3XN5BjaMwYPYakt2gRWnYvGbpoUmPxggXoM2o8Lu1di81mJ/QKKTcicnR2xP4ztxHS0hdRV2Jg7dsY2aQmkKU0wzagEVYvWQy3+m3QuZkJCxYsxRCSvog8u4LsWygihVunTq2waMlKyN3D0dAlD4uW7cfYkX2xdPdtuIa0xrKF6zFqwkB07FAfS1fthH/zPkQ18qCTilgDO7lHIJbOWwA1qb28NGIglm/cj37D+kJERIlZYgV7Zx/svKCCTeZlXC/0hrvCgKQcFVqyTbEihEmScemWHOHhllEEVq7+rLMs68DGaN9JjvmLV4Lr3gQKVQrSBAo0C26O5UuXkl3t0aehBAv/3YpRI3riQjoHDXsMwr9rd6Lr4CCcvnwVImvLKISIjl2xeMkKSDyaopljGhatvYExIwaxzcGmy2uw5oozGjhIYedeD0sXLwHfOZwUwoX4Z8VOjBs/HOtIfuwDWyJq4SaMntAPzqR2eehyLBq7BLDp59w8hTVJEozq2xK71+yGzL0+eFYS9v7Rx50Az02GwNZdsGTxMggcwuAu5uBKLHkhBxjBt7LGJSLotKRWNmTYYCxdvh7NezATq3HRJVSMY5evIzjQEzGxKWgaEIlgOyUKjbawJeu715fhyKUY+Lb3g4kIwFtRp2AjrzTXzBOFmXzqqSX+hCnOzTk+ZUjD1wKHh0U/i+N90KkN5kedw+Wf/kDd6a8i5o2XYXpbgDiOGXydHjaFxeD98gVE7q64fOcsdkTFwlaQiywfFVr5maAxuhPxLULfniHYtPkEKcQMUBapEWRvDWufQGRmZiLPbISTnBT2SvJukomZFy8Cgvzhu+AvmH7+GyO+/hwrcxIQv2ErzNYSCH2doTbrYOXhBVVONgyaPBQUFMLG3RaLz5/Fq+3a43ZqFvZuP47W7RrDs0iJU4VEnBuZuSE4cPHuiejENKiL82GSmGGylSGf5ENFBKuACIpPb3yJH45rkNdMi0aNfRF36iR6TP4EP7x9DHJ7J7Ro+x7WxLzF+mh5XiFSbkcTX/t134wauKg288Hlck/eHUd+8yAnJyemVYzxDVDRCySXiIV0UtDbM4Xrg9IdM2ZM2fBexo7m7Nmz2LBhAwYMKO/OZeI///xz1gjyYSkuLpaTvC+VSCT9SJ5qtI+/v/84kvdJ1eXdwcn9EuA+6IHqogIPvMv6jppQIWSD4FITBI4Qo8eNs3wPsMxoGeouQOhEdhpwTH6pfCibrWcoJk2wDO9r4t+D/ZzycvlQOAZ/BVnGjCsLvzTR8t2nlUV08MS2mPBSeV4mTrCsH9ij3FXvSy9Z4gIGl2/Xpk+5rQIzRdmA0eXHmDC+fOin2NoR40hBzNCuV/mww1I8wtpgbMmIvcZjy/fzb2j5HDm2PN2JEy0FXBMXskyaXBY/fuzwkm8hCCjJ9rgJlrTUDSxtAfY+ERhDFhZvcmM0sMyWKjTrIREJENiiKyS2HnipZD8HpkLe0zLcsjvzizcdXXa8SRPLh3827zYEpQObdx+4gqETGmPoaIutSHHOHfCMOvRr7Qs7cUDZPuPGlduSTAix7B3pwfx3IdffcnyP4eXHE9m4knMsvd4BCGhcfv3a9q1cYZ78UqkdQbk9x/CS30ZTYgPjGdERoyNK1xrhXb8V+ja1DK6cPLE0b5ZzHDO+PB/jx40qW1fRimHAsPKhwOMnlN8jgU26ILDk95g4sSR+QHl6/o06kaU8nUmTKz4TT54XoXVBzOdk3LqVlLPm8173jkd+ijDyN+aNaXiZCNZVfy5EiEIBj1atkKcvRrOkZHzRshl465aDQ54VHnmBv6Iphj4vEbv3HMKGrV+gSUggVMpiKLPyMaBHYzQM1mHlvmhcSyjG0NahmL+tEAKjGYy/JV16Lsyh9mxV6s7Bk3Dr0hzySSNxQFeM1C0HYOCaMMgzEIf2nERe64bIciHCoTgUiQuWQ9qC3JfyYKS6euNKYgxatGkJiZUQQltnKGxt8cn8heCQfL3epzc+3LAZ7RsFQa1Ws9NqG+OKYDZqwdWb8dbBLHzU2ggP7lm4CHpCkJuLv5e/jZc+WYSBLy9CVFQ0dm/YhkYdO8PAYTo8n68WBi6XY4iKupW/Z86gXhxO7RvWKxSK7CqimULpZ1K4upHClVTByicETYg5ZyNSuLm4urml3C9NKysr08yZM+9p/vf09Lxn25EjR+Ktt96qkQ1CKTk5OR3s7e33P8w+JXA0Gs0qkr8nWgF5oA3D/wsteg2s1eNLhNVY0nEERJTcawz6KAwvEWOlWNl7Y8LE6g0BnxVMv9y98MrEwn+d531qa11h/rFxvULfCxtV/57a2tOGsU+QCoVYOqIXFpNXVpbWCD6pidvydODVuXfgs0BsBbjWRa/+dpg3510EeBZCLLGBQKBCw7pBCHSLg60ol6Rri1t3bkNnFpOdOLD2D0Shgxy89DR817MPCq9ewVe/k0K+fn0Y9QbYCg1Y9v7XiLeXYUnduqj753Kkx56CokgJeZAvOEYBZFIFsgTJ6H/4DL4nx+rQug1iU1Iw4K9/2NEfOpEcA36ag9+nTYQ6V4XJ/y6B1koEXlYeDCYBtFwd7qTHY9kWDbxcJPjn4GVodDo0b+SNv+aOx
Z00Luta2dvWiKnvnCAv8mf9azwYGVe/r46bfPv3Ywb9VNt5uQum8C+b4pqICKYG8DNTE1epVHZSqZQZScE5tXMFVv/4HuN9NPGztZftSt1L382RI0dezs7Onu/i4lIpnumiuBumBSspKembmma0IDNxqH3+hQWw71fTXSpBxMIgck4yck4P7TzqfjywhcHAKHSumBRoFuVaXFgIK+unOd7XDLXOXH0B+hTJysoiNwkPDg52ZXEaZRHEsocbeF9ErpW8wrUqKFSRl53Z4qWOJ3zyPuFNelLrETwTxxoFRWrYyB9u7HJ+bjbEcrsqBYFJr0UuSdPBTlEWV5yfC5XeBDsHe/BKDNlMzOx/XNF9R3gwRrES+Ys5Hr2Uzp077yV0rn7LZwspl5U3rt/J2vx139a1nRfm+WQMJF0kpTXqau56cs90blUH3doGYu2JFHDzTCg0WsEgcIEbrMlL3IQ+4fZQn4pDOil5E1MzkXvuFGaPmIw+9cPh2qUDfH19MGnXEXCEWegYUBeLfpmLlSuWsrYuJ18djTe+jkOIiwdi3NxwSFUER3tnXONEQymS4PWbiTBejCI3tQYaEQ9GtREckwlXyDuiw9yfGcs4aI16mJlSnzzDzDwnzLzeBiIgzvKtcOpgBto0TcMvrzfHmz8loX4jZ9hZc5CdpcV3/+wjwuH58fTIOO+NvhSbs/uHgd24XE6NfAY8S3Q63b9CofB9tbIQ184eAHnFbGvWsR+7jilYi4uLPZT52e8QsfAWE2c2m7ifD62fPWvjNWuxWKK+O72wsLB/1Gr1z5cuXZKGh4ez/heGDx/OdkkwrQwxMTFguhGSk5OZLhGDTCb7rLo8psSen+JuvPOLjdlkae1IP09u9kaVtiFC4KzBYOhubW2dUyGa6RoMr7gdOSfGYtfjYa6RUqmUHT52ql+Hti3XSySVz/mBNgz/zF+BsVMmIjc7CzKFPTQqdZlgYONsHaAmhamOqG4RY6BkpWDFRVE+KSwVcmjIfa8tLoCRI4SJiA9bR0f2QderlcgnD42jrRT5BUrwyUOlIurcyckBeqMR+tw8cKXWuLTzX4R3JzVrTQHMRJHLxTxkZuXCkZltrigfWoigKCm48nOyIZTbQsTRI7dADUcHBSmki2AmD2Bu9D4o6ndmLDBhbWNN8p7HpsGcuEFbjPxiIymsLOe17WQ8RnYOxq6LmWjhzQNHIsfV/VsQ2XcEeUCzYevgCK0yDzqIkX/tKOR1WjPKCtYKa+Rl55alu23tRtT1lcK3WV8IjIUoNkqQGX0A/g07sIKBuX4SG3tISgpQZqKj7FwiTKys2ZtOzNFCKLFij6kgx2TeJUadGjmFGjg52CIvx1IAC0lOcgrVcBIW4jbkcBPIIRNxkJWTDydHBxQU6yDl6yEQWYb/FuTmQCBTwKQqYG0irMgLzMCzAt9UTN5nBjiQ30hFCmvGaEuTfAU620A4kV1VRh55SUmRRc5x3d6reHlIJHLyVeQ6s75RoNcoUUTKczkRRYXkxSiTWUFJfmN7W3If5CVDw7XDtkVrMWryUHZinazsHNjaO4BPSv8tG/eiU7tQpBZK4GZt6aM7vHkdOo8YiwWrj+GlgU2IoNBAYLgDk3V98NQ54JN7zVhcxBqaWYn4bH4vbVuNup36syKjID8fMrmcCJU82DPXgYTl1gpy/xlRSO4LnlgGKyGpCeQUknO2h5Kcs0lgBWsJt1IcRDLIJQ817Oyx2LdvX6dndrCHYEBj54GRYyL21HY+HgXGzsfZnIe0m9loWc8ZBUYv3L52Fcf35mPzwVyMHNAHyw5cRWqBCCZyX4aTh61j30FYvH4pFE4KjBg2BAMH9sVbQXWQt/oPBIvscMdeVeacx1atR0iDFggjb6STZ6LBqeMGiURMXgsaIob1yCtUgmtQEcGgJc+wZcprLpcIB2cpTLbOMKfEA3EpzOB5GB1c2Dk1zFwheU+IUcTnYUSwLfoG30Q+ET4dOkVixGtzSAoayGV2RCw8X9NRBdlzv/hh7qDPajsf90MgEMyc98VrXteObWb7Tx18wzsRwfBh6XpSK08ly/RX5mxd/ufbvVlXiCajgf/RgLoFP+xIqPJFoNfrPYhwyD137hwj+BmbAzY+KSmJ9cNw4MAB1hBSo9FUW3Cb4/dcdjcUVnb3m0/uD+cGzJAfJrTRaDQOIULgHn8M5H6M4DCTHFXGvbCwsCkRFmeqPTbRAi+99fHN1PQstm966fodP638+wcnInTK0nygNGfmETAnHIbRoxWWk5e9HUeFvi+NhyrhKDI5vli1+wD4RjVGt5Ij0yESu9fuxeSR3bBj7RYMnTQIMRlmpKcWQnBjK9oOeQn7L6WiS303ciHTYMiMhqxpEyTqnHFm6VKMeakrrufZIidLibSkZNRxEJAHQgwZKQhPpvDBu7oGrk42EAW2QGquEtHxSgRrDkPesi94hbeQyfVC+vFTuJ2UTV4ATXAxlUPEhRaGSxsR2cATNiStdYcS0a2ukvzAZhyMyUOHerZYvukUhnRwQZwyBH4y8swSMZORfBv2DkFIJ4XMuU1b4WcvwO3T28F3q48dm88QEUT2beAOextSmFhLsHxPPPo3VEGvM2JHdA561reYkNhYkwJIKsT1m4VIOLcFAQGO0OXEQefgjwKNHms3bsDLkyw2IMc2/ovIPiOx9nQmuUH1aCa+CIlUBqOiDjauOIJJI9sgLTEBuuJMcM1u0Fp5I+boQfh4u0GnykahjyeMZilOrF/MzjzZeegorDyWDLVagw5ON+EV3gPGgiTcyjHh5voNkJBaTI9B3RCldEDa7kWwlXLQauBorD2ZinAHA8nverRq1w5mhRXi4xOQd+skXNzs4RjaiZ3Rb9nizRg6tDXOpKjQ1F2MxVvOobFLIcRGFbxa9sHS3TfRThEHeasBENt6IGnHKkjDLIYNOnU+NFotlu27gXFd6qBfv7b4c/4yTJgypfzmNRmRlZEOa1dn/PPvDrRr7AouKeFlSENCpgqxxzdDRIRHzyE9EJNjxp39S+EoNpEHVIQtZ2Oh03DQo56SfGpx5JYWGakqDGlrC21ROtLVNji6ejNsBWr0HtEXe6ITkZGSjkgPIRISouHWvBvycrNxp4gLY/RKNOwz9pnNEkheOo2r3+rZcnbeyOeswfvhYIZcGuy6QZizCY6BvZCUmIf0Agku3zoBOamk/P3vFuhMDvD3kMHOLRj2ublo1r4VVu/cBVJew8yMfuDyYSbbcgUytCHPxYrly9kuEsb6/Y7AgLWaQsRGn0OhngMrUmFRkQqBSKmGwWCEUWAGx8gHqTOQCgipNOq1MBNhz83VgJd4CUaTCHypCwwSN/Ls61hBzTGTbUgljGclgA+568265oi+cRX6pBNEpC+o7UtaJS/CfcJ0wTfsPGx+qWAoSL0ZVtV2QWENzvd/c+6YjT9NX8qEjQaDYM1vn84dMvXz6XdvSwrjPK1WK23YsOEZPz+/0MuXL1da36pVq+sqlaqpTCYrelDeVOkxc6Tau8QCh6fJkgZNMGdmZSlsbY8JhUJNRQdld59bdnb2Vw4O
Dh+xEaQiizsHkM8LVltX0TNAhIRdbm5uZ5K3t/h8fkMiGATvvTIaW/cexf5jZ1FYVGx/83Zc/TqBAWUTDlVjw2CZzId9WTIuWkvzaTTAxskeg3o2I4mfhcRKCiuZyDJzIoEZE6xTM9dGBqmVGHyxFYRCEXkwLELl4Jl49AoVs6MvpGIxqWkrIJRawcD6XzGgfl1PnN22HRF+fNblKiusGI9tNt4QkbQvJRfh7nntmG3kNtbgpmSzzSXMHHQyUuvXCC1jsQ3k2IxntCvHD6Jht14w5lXdWiaSyOHu3wCe3EIcvGUNAZhhpQKSdyOc7J3Qs70j4tNycOfqfjiEB7Lpikneb545gnrtSaGcUW7TwrxQTGYjojMA1wraVJuTAIHCl216LMzPg7WCqaWX3wR6olCL9RoIxVJSC3dEv+4WAXJg12n07hkKTUkTvUwhx/6dJ9GndwQqOhjnlM4ATK6doVgFVWFJqxLJi5WNHbr064/jW0iBS669jUSCXGbYqpmZFdBA7k8trhVKYcczs9dNT877ZJwKjcVM/rgQlN2sdzVM8cUIrB+OrMv7ISK/tZXchggeISsuVPmJ8GvTH3fWrACaByAn9ixk/q3Iy9bSLbhjz0WMH9EHCYUqeBNhJxbwWK93dnIh/L2tERXLgadfCFTKeOjIOcjI/dK1fx8c2bSDPQeFUYT0SiU6BxKZFW6d3Ingdn1gSFZDIrcqmwRNZiWBgFNBoJM8Ng7zRuyBzQgIaQSOnojWYjH5RaR41m2qjRo1emoz4f2/IiD37YTPfsb7fdfDM2MbmvUdibpaR8RedYSfvz+pZBxCUD1ftGzWC/YRPth5aA8+/+wreHh5wNPVC2lpaXBx94Rm/r8wCSQYvX8LPqwXQva/govxt/H9krWI6dyECNwCZBPhEOjljbjjJ2FSFbGzcXI0xeCpdRAamUm0VOQ1WQgueS6hM8BYSN5lRFiIpFL42DgiIDAIdvb2aBZeH3W8vbB4zT9QpeXh+B09ipQBUBa7sCLlfoUGpXoimjQ/trzku16ruW/fapseQ/7d/Mu7i0wmdtpRFGYmBtxvW5GIGbOGsFOnTgk2bNgwmhTcVra2turBgwczgkMnlUrvtyvLnB03lnT25oypwxOyrcYM2dZhQx3cgtc8zBR3RCx8TD4+QMpJHoostppexivHAH8bUnmU7j9x9s3TedKpo8JkZU6bmHupogbo2i6SFQwMYqGgku3GA1sYBg7tCamNHJpsUmt/aSh0SotAkvq3J3FZsFIQ0dCtGYngwpa8sIf2sXhh7DtmCPKKdAhxJqLAUQiOp2V4SZt6FsOQEf2asfPSS8jD501y4NS/O5MqAm15MFnbkVp0HnqPGgqZ0EQKTj5CbQth7jIEchEXWTmkdh/qCo0yHzpOV0sxax0IJ1Ij9Aith/CwAOQVatHQzQZqUoqae/QnokWKAvLAdghzhlWDwWyzfosgi8Ovkf2ao0BlYFsX2HPuGGaZ9RA2qG+bDcHgYaRwMbM2DEz3gA0pwOtxpNC6D4INqfHmF2vRvZEbyetAtom+Q4ilib7XoP6QkcKvSGNE91BrcMhxxQIOBHwBhHwhcrNyMGJETyICjqPv4G5o1X8ksnMtbsYHN3eGgdsdCiKAGOMZha3llhk2cSDrV8CRxBflZCMovDHq1auLYhJnQwpBYbESXoPGlzS1F2B4a08U50uh53iz/f5cWx+45eaAJ7VBx0GDQBQU/Dg8ePYdhnO71qKgQItBTX3Zbgt+8HCmeQ7aAlIzb+EBE98PcnIds7NzMaJ3U1iT35zxidDU3fIgjO0ewnZJeDTqxjrf6dXUg9wWTmAnJbUlL9ucHPQcZTHcdAntRM61AIPbWJ6/Ll0aI5/sHGwvRZHawAqGtv2HQmxtgyBzERqOITV+8pvZOtZh+6Ol+eQcJDboMrQfuESMeos4cB04mJyjmbxUtejTxJ9cJyKO6o9CTkExmvu5wFwyoZFI5gJnkr/+5N6Wy8XsNe8S4QVlQS7cBoyBDTkvpmuonltJl0Tn4c+sdYHB39//NqFar4qUh4PH5+HrTckYP7o9dnzyC4KDm8CeVGZU5L7zcffDb3N+g4eLLa7m5eFDayvIRD4k7ITUwkK8+c4fiHCwg95OQp4hOXJFQryjyfwfe3cBGMWxxgH8v3t+F3cgIRDc3d2txVoKFGgpNdpChRp191c36tACxYoVLw7F3Z0kBIh7znf3zezlQhICFBpIKN+vb1/uVuc2R2bm2xHU+m0azu/ajRy2LcAtQqoQigghAjpWMDh7egsUZw6c1jwI2W7IyVmQszIh57B/4+wPk14U8dWMP3FPuwaeR6OsEMALNup89qxkG5+VixdffRr+5zNxNlvCu99+hCk//wKRVdbstjxWQbu52+qUJZvNWuTm7dq6sVPTVu3XlbSv2S8wLTczVe1uqcjuK5bSdDqda+jQoT9fTXr2n4jr5XDk3DN8Tibe71EVbYIyIPlXjwmvUPH0lY8ukTZP0L1vAZ5X38kuPxydZ41HHdMX+2RsOZMCVmC45MELV6wveD1n4fIJz4x/aFzBiS/XhsHf39PQLygkv3uf/4VRNYNCPJmYwdeTYfA7yUPw6mtWEAgJ8hbc2Ba95/fDnzVzJh//gqGOeZsGnZ/nOnzsAp7DmPQXeoX68EOMFxogemcBM/sGoHCZzT8oJP8Vu3aw5+w8+sdO7tlu8ZaNtCxtF/qyag1mBBfq2upnudCAyPu5vULyr63zCyy4doDvhTJXSPCFMAKPdqjn4zuai55HUM/lWccLC+o6UYtQ9ofn9uZu+BgLnTPkQvlSb/Jli+c1f/7vSYwvAvLX+foFFuwbGuq5h7xQV5h/UP57o+d3yVOs9wtAy75D1UFr1M8UdOGYoICiX6zQ0AufJaTQjeNpCy5UVvczqze/4BMHBV84jkcPQgs1KtXqzexcnjvqa/IcY/HzpM8/wDf/WhdO7h/o/QyefdUr5X83vXv55H/XQoKKPnbk95lv8cv/boeGeM7l61/4O5a/LiAIN9qpU6dirrwXuVo82qfXazFl+lrIDhdOnjoJH18LIip6GszzwXW4+sFBrBBqQppiR5rgxEN/zmMb7TjJKiiBlUIgs8qFNs+KULMfQmOqoW50AyxeuxBZf61mFQ0dNB17IzeyGjT1miHYmgkf9negisaNM1//CDljH5LsbiycNx9NGzSAyWRQZ9tUr6+9kBc5ZQV3PDMODSKC4XfPPXgYFsyZMR+zFi3F8EF34PlnnsZXk3648TfxJpaRkVFpzv6p37+z4rm+KwYXbYpz9vD2NpcqMAz836Qwu8sGvjjd9l6lna7s7OxgvWRfxrPgqoF6TNuXhawYw6SRPa+5sKCyVGw+ETlnnoAieTI0RTKZBCsOpzjQMNwAWbkwYjr76cxMT91h8fV/tW69+qtMqzZ8w1Y/wrdt3LH/vtGpaW+FhATzcSpotsryxtdYNr8SE9VYyoX09PQbX0q5hWh5KN+sQZ36JQ9hzB8hnPzfp1h9/BQCjEYEW3wQZjZBb9HBx62oYzxwvIbFH0fmuNL
RsaIPtmzZgj+m/wYfmwh5SQ5q9O4Ev/RsSDZ2jJn9RU5LQESVSEi5WYiLO4H6tWuwCo0ONrcTok7rmaLb5USmIuLNQ9tgb9UZSbKE5lYZboOEOwffhtMHd+L2vt2wcnm5bypQLvDKsN1u/8BkMj0bGBiIn7d+gWHN70Ol+kXn86lYu/mWS50jMqCyLAhift5a+rN7JSWeV58b3FHXD22jzHh3U+bxkT1bPVIa547T1esY7dxX0NgxRElC+8q+6F3DB+fzpDMGW9p7bdu1n+SZdrtWwXETHn3g0cTkzHr7jxzv6HA4zWcSzsYUFBiuZhyGFUtWoSf7wl7K2j2x6Ny4yj86lzUzFeYAVuuU7Nh4LBPt63geVyzbFQe95ELXFsUfF8mYv/YABnZuePHJbggJW07konV1T0029ehmtTV+WnBN8BjANx98Am2gH/qOegCRJTwVO75/M+yRbWA+sQyr955FniYYT943MH9rHjbF69G28iVmj7lKn33zB5589MLYEvs2LkD9dgP+0YRTpGyxP2wZZZ2GW5sAg15Av3o189+xjMclwc5eHU1KwOHYYzh6+CQOsUx/7549OJCVBrM5GFqLAe52bZCj5xNb+eLUll0sw3LB1XkEHMcOAVF1kWcwwhpdBY8sXw3LvMVw5fDHFHmwW1nBQnIjtG0LmIfdDl2cA3o/A9LtOZg+dQpSdQHoMfQOJGY78fHn38FsuHG9dm4mGRkZwbIsa1iedmdQUNDr7GcoKywUbH+m++toVrk1pmz+FhWq1uFPfvb1uv/1xxu2aFdidIH7+8Rqp1FvMRp1RnUa7NBSHOU1/tTJeFbTLwjTVgmxxK54vN7FA4pco+iYmtsTD8RPjtBmjVL8QjfatTGvfjZIdyY8PPyK0YsPXnu687CHnk7hDR8zsnMqetdftjo7ZcpMuGQR9/eJxm8rjsFgCMTGOdNxOseK6KhKOJqhx5jbG+PX6QvR565RyLW58NN3v6DLwDvx95K5qNmuN1rVDIczMwFT567C4JGjsGrmb9BXbYG49bPRbtg4nNy0BKbqrWBPzcNvi3cguFYzsFINTu9dh9W7z+Pee4epXQq3/7UYubmesLXsSMUvUxeixx0jkbxjOQ7nmNCpogtbjychpEoMjhxNwt0dIrFy9ylUatYTcavnonKHAUjcvQJ+tTuiW9Mq7B9rIn6bvQz972ZpmjEFAbU7wHV6PVJsAu66dwx2rZiL83I4ejcLxZwlmzBk1ChYHRJWzJ2OdF1ldK3qhqhcaBJniKiD++/tg9y4vVi45QB0EVURf+QoHnr4PvWPTpXa1bAnG2jQojeqNc3GhgTPrU87tQt/bj6Gym0GY8uK+Tht9cfwgV2QcnQLNuw6Br8qNXH8wEk8PHogpk6bjQZdBuPMsskwNh2InKNroavcDK0jHfhz3SGMvHckDCJ/PBBQ5PcYEkLRg5sF+0OXTlGGsiSobXAKkwQ3qg+6Ezl2BwS3BIfLyqdm5GP7Q7ApsOvSoOg0kE06CEY9DD5mZFtM8LWEwlSzCqyxu+E+n6S2o5EMWmhSM2HNyWN1JRtkG++1ZVO7F+vu6I+aETVw5vxWZJ5OQvNly6Dr3RYtNXVQvVZNjH7oEcTFx2P+nBlldG/Kpx+nznrnj0Wr1KmZZ/7wMfx8S34+f3vDIXh85j3O9lW7TRn96YJnzWZziYMxFTaw0bBMUaONuNJ+Jdm4eWufBvXqbPL387voOnEnj31rdysFw0Gywo0jKirqko0qr1VE/e58WN77eB50NX9UeCCBd6kcMGqcfeb8FW937djuD77+sm0YtBH1cG8HLRLOpWDAPSMx95fZcGisGHXfA1h3MAn1MhZi7ewjkBUzth+Mg93hgqFSI/jGr8Hd992LJb9MAWreiwUzZ0DRBmPvyUTYFA37RxGA6KYxaBgTiJNbtdi5dj1ch3V4kB0zb8sJljE7sT7WgPtHs4LHsWy0q+mHFfsSEGrmyezGR9+AJTACPqxC/uueBIRZFLiDK2DIyNGYumw3BjTKhex2oP/QEZg1hV3bHI5WwYn41iohYcdWtcCwegFbr/hj+4F4VnswIjLCD4eOG3DfmBFYuP00Dh5NQ4A+EStSfDBmzGg4JVbLcMnsGA3OHtgGVG3KH74XlLhceVmIj09AgNuJ3uy6U3/6E/f2bwreTJRn1zrNhaZze9euQIdunu6UK9cewOgxg7H6lB0rDyQjxMSnTO8CyeXA4OH34Nvf1mJol5pqDwctu966lVsQqfHBbdXdSGx+D8LYfj//vAgPjR2OGVtSMKz11bSpJeUNqyUFXnkvciOJgqi2MXCJCiRWe5G0Zoi+ftAoWrgFl1oh4LNjCvYcGNi/R1asAO9fZQkOhbRkNgySDaLaxoYd61TgqqiDaGPH59nV3lxgBYew/r1ZASIP1iwnNG4NGkVUwcaoGEQtWoNnAneixpY/1ShETpYLJyODsPbgfnSsV5v3Wyrr21Pmlqzc+Dj/GR4aXGJhweW023ft2vVOs+YtP5h09wzXRTtcBiss8Mz+mgoMP0xb8E1y6s9VeObrazGnDu7T+YMuHdpMT08531nQGsYW2lWJjo4OuNK8DzcaH3/ht6/fDbdabQWNFy8bYUg4uAcLU+zo26UWis/zqXYXYj+btW0CTYoBAaGBOB6fqMbwQhq2xtK1W5ERWkPdt1XTBjijBCE4wA/nAyti+/ot6FbHF2fS0+GyVIAex9C8fgQOHNjDO0Oqx4Q5j2Pj8rOo19XTw6KSvy8qhXq25aScRVRUBHYeT0ZUgA8qhfEGbEpBurw2bt0JTcXacCfFQQytBd+AZPjmh6gaNW4Ks9UES4gPssKisPXvrfDJPoet65agcbM+SDzgi6ggM6qFabFjy1oE1G7nuSfsGLPGcy3F4A9dfhdGlzMPqakpUNjpi0wyfhEFu/OqwjvTQ72qJhzYvRfwb4EwPxOqVig6Xbt3ul57WiJCo6OReszTk0IMiMTK+WuhYX+kmkTbsGX9X+jU8kKbnNMHtqrdKh1OCfltUclNQPF25yBljj+q5YuTFRgsfoGQs3JhzEiDZNKrs1C6WnZCJZsVLn8d3Ft3oZXbhaDubbF23nKk684h/fQZONgfCK1vKDQRFWE/cQp6Ww4c7DcsO13qAE1wy+qgaX6R9yCFnd8UYUOi1YoZ99yJ+5cvxJrTcQjNCkKn+/rgw4WLYI3wh19MVYz46XucePNDmHyowFCpYsTRE6fimiWlpOFk7BlUqxIFe152YkpK6juhERV/MhqNtlat217r6a95WOWHRw186OvJc35Nz8iKyM7NC508e/H/+PK/V54okvH6+/vXudKsmGUlICAggy/e95dtwxBZrzH69/KMI8GzsfvuuzCPRcd6YSy3e1B93Tm/E1jjmAtBj76dL0z8VblFL1TOf1379qJTcw7v3xngCzzjV14YtaJxkf1GP3Bh8iD/mBZo521L3nhUkf1G9G6sHptxaC3at2qWX/72nOueO3sW7BdRv2NBsbFpFU+tfPZv2WjVqZ/6+qH7hqG46o1D2Kn6F1nnLc+Of/L+Iu
vH3H+7+rOgz4UYghZq43sBo/tfGOazfpcL97RrzIXPElG/k/pz7MhOBet6VKiJHoUG5h3StQrSLFUQWazYJ+adQJX6DxcUnX7/YRKG1upy0echhFxeTk422nbqCsGah45N6mAvq3y4Umxwpmbj/Q5+CA8/Bas+Bu1e64WsVDuqtnkQp9q3w4zv3sDs9WeRy2eybBSMu8bfj8P7M7Bj1WqYEuKhk11QbHlwymmoYAyEIc2KPCkX2ZmZnoaVOg1+nTIZCu8j73ZDkWR8sHE1bDm5qB1VGWdwWn28QYAv332xeZ9hD6uZ2LiJb6NPt9YfPP7gfROjLKXyKLZ4Xfkfa9uq5V8tmjaJfvzFd3fGnjlXkLWdiE1A7eqeuXF8fHwGRkREHPVuc7lcOofDYfAuTqdT73a7tXyRJEnDp7jmC69YeH96j+V5OV94ZMC78KiFVqt184V3+WQFEwdf9Hq9k/+82s902W/ciF71L7e5XAus2/mqjxkyauSVdypHTP5VShwkfOy4olOoDn9wbAl7kfIovwBPUYZyIDs9FWPvaoDXh2VD9L8PPy6Nx6kdB/HVrEXYvegr9G8biGPWOLStOxBuR1007NQJn775KHJc/nAHDEVUzE+om21FDd1pjAqsBkfdeCQkR2HO6e0ICA6Do3oMzu7IxorNGzFxyw4c3LkN507EwhgYhsnbd+CRVvlzzPL5Z7QKywxfwAczfsexhGSY2FckJScLPkEhl/8Qt4gqkRUPxCZ4MuWWjRuW5hDmlx2d8UpYJu385sNXG8afORPz2AvvHZEkWTvptz/UwePsdqf85jMPxicmJja78pmujP/d8BYkrvZYbwGDFWByg4OD03x9fXNKekRy2TYMhBByK3IrGdj0x5N47fEQrDlUE599vABHjp/BoMF9YM8+g5fvbYmTudtxMr46fP106NC/E3au+goTOh3Bc1NSkKtUZzXdAagY9w16NcrEkRwJAaIWGcF6pOVaUaF2CIIDAnAgJwdajQGP1KuBKev/QprNivp16uPHw0fQJroKGkZEQOQPWgUBlTMzIZ08iaC6BtiNGnXmTOLRrmWj6azA8C5/3bRRg42leOprjjB48UpAcFBQ0mtPjmn36sc/buXrcvNsGH1HzweuJXO/HryRi8zMzAC+8HU83dHR0XG8AOHdj2JahJQjFF0oH6yKBUJWJralpkOj1EPjVhX5LH6Y0MWFj+adh15QcPDgWUjaICRnWCG5ZVSPyYRPshFGKQnn4s9BH1MXIUH+OJcjYu6eg/j6zq5IP5uDw2k5OL58JWZM+xU5Kelg9UJU9w9FrWadsGP7BpxZvRKh9Rti/I+TsWbiBIhaVkBQHJj1xyL2R9wAg68v3LY8RPj6X/mD3CJG3jXwvWlzl6oFBh56L63zjn3mtb6T/veG+jou4Rz+XLbq43EPjHr6as+TlZXlz2rw0hsT7m/9v0nTF9WvWXle9Zgq+0orndcD/1vEG2EXKTBczTgMhBByKzCJWiTn5uB0VgbCgnzg76Phsw1iz4nTOHDMwWr9TuTkOGFnmX2T3N1QNAYkxgOxp1IRElIJh05lQgMbnpjiQLLbjmpVH8fait8jKNwX3376AUIDg/DihGewZNVf0PDGjxpgddeOqLV8kdqI0rn/EL665w5WUBCw7MAe/H76KHbo9TA3qAWXIoF3jNdrik9MeOviteNKFcJOnj2fXM0zEFHRbcUXb/i+pAI6D8172wPEJST6PPjUq6gQEYrtuw/wzRMWr9w4oWWjevOfeuSeMYUbBF5OpUqVzlasWPGc1Wo1v//y463y8vIsvH1C6Xz60sMLW2az2RoWFpbMH0sUv5cUYSCkHKE2DOWDxu3GwbgkREcByzYex+4jJ/HtN9/i/RnJCAlV8MnvJ/DEvTVx8lwWLK4MnDh2BErwHZj1+1J2TBW88ekELPlzJuo2bgzXrl0wWp2Y/tyL6Ni2BTq07YZja7ai5+23Yd7s5VAy0xBUvSbkSqGY8uizSD16DMFCLuxx8Rj390ZsSk1F08rRiIoIRBockDIysfqlVwFJgdPlVGfE5MOt88nhtKxyrRFMt8wgbfzfCm8cyMPoOo2YptEI0Tt37iyVNgFeCeeT1KWwbXsPDhw+9vmBEaHBJ19+8v7eoaGhyfz5f+GpoIvj/7YtFkteTEzMqcLreQHGZrOZ+OdghVJj4UaP3kcWhf8m/JO/D4Uzeu9rnjZvY0e+GI1Gu3f5p106qQ0DIYSUIE3bHm2Dd+K+QTpsedmJRXPn4t1Pf8bqJfNhtdmx68RB+ATUwtTVabi/yQb4R3ZA234TsXrdOgSc2I2Dew9h0aoV0Gh1ngnMbG5IGhGKbMPWb79ATocGOHlwJ2SNAzh7DJLLDrfMMn9Jhi0rW50G2969BwJgwrZt2xHpcKDduVxUSI7Fa736sfMocDvtvCsHGjRpAsFgQM+XX0ZUbT7M73+7xBAXFxedlpYWXDjzfGB4/0clSSrVSnCtKpVW2J3OgDybM9Rg0KldLJNTM2v4+prP+5iMaXl5Vv+kpORwVmApeD7EM+PatWsf4T0T/sk1eEbOCxJ8Kc20Xw/atWvXdl62bNkHrzO8pDFx4sT3+Yb3339/Iv9ZHt7zdPH08dIX/xkQEJDJ1ycmJkZ89tlnT1apUiV27Nixk44cOVJ78uTJoxs3brxn2LBhM7Zs2dJ6/vz5Azt37ry2d+/e7GMu680/78CBA+e3bt16C9/G9xk9evRk/gvmx/Jz8HPxc/Jz82vwa/FretNQnu6N97333njvFS9x8/URERGJTz755GexsbFVJk2aNNZ7r/bs2dN4xowZw/h94PfjSveK78uP8d4rfi5+Tn5ufg1+LX7Nwmko7/fqUt8j773in5N/Xu+98d6r/H8zvfl94vfL+z3i3zn+3Sv+PfLeK+/3qPi9Kpw2ejxYTohafP3Fp1j1awP8OT+VZfY5eOOVlxFdvSar6VeDNnslAk0CDmzdgmxHBaxavgOjh1gRY85ClR4VsW5nItav3QpHUhr8o/M7lfvq+TNgOJxaVjBwwc8/DLogrWfQJ/CaIx8YjucxAiRW35MVJypl5qKeZIPW7APB1w/ucHaMHKOOMsm/KQr7KTsF9H/kYYRViIBbuTUeU2RnZ/uVVNNmNeVSbQk6aki/F6/2GP6ogT968PPzyy7NtJQHWv4Hi49f7w1bFB/Lvjy89/YX5Wnk772/CB5G4e/5sxb+nj9/4e+9JTV+HH9vMpn4XOXgPwufjz+r4e+9jWR4SIm/95YM+XV4WMgbZuJ/7Pn78nRvvK+998b72XiaC98r/pkK3ysemuLv+T34J/eK39PC94qfh7/3hrL8/f2zvM/9bpZ7danvkfde8e9D4XvjvVfefzP8J3/v/R7xe8rfF/8eee+V93tU/F7R/BHlg0N2INUh4aUDR7DfmQPBLSKrxtcQJvqj8ysOHNq0A7l/b8OSv/7Ez/dGILJdM5w6/AMMfAprVxi++eE0Gna4Ex07DIR+5cfoGGDGX6zG3/+LL6DxMUHjcMOalA457jgy+TMDpxt57Jp8lks+i
BMvLOp0erUgYFdc7L0OcPOvENuuE6BlhQENL0yoc9V70qzmmOyfoKwVIWg1bM9Snx+pXGrQoMF+/jM3N9eHN8zjjQoL/725kfi/a/63hP875n/3LvdY4man5bUmvnhXPP/88x8U3qE8ved/uAu/Dw0NTSn8vlq1aicLv2/SpMluvnjfd+rUaR1fvO979uy5gi/e93feeeecwtceM2ZMkXnNeW36Rn3Wq31f/N7wL3Dh95GRkQmF39etW/cQX7zvr3Svbr/99j/54n0/YsSIaYXT8sgjj3xb+P3NdK+Kf4+K36vi96b4v5krfY+udK+Kp5XceBLLpc+zmnqtAycgyEbIBj9Wq1cgiBXQidX6N61dwkrZBmR2aYOQ/XsQG3AKp1az8qOsQYSfHYnHViEiog20vy3EX19OgSk7Az01euj/3gLXyHvgsGhg8AuHUqES5IoRMFWMhmKxwMQKCXaXWs5kZQMZksTL54Ka8bvcfDRINY6gLiajiRdOIfGBnPIjUby7pRptuEUjU7xwzpeoqKgzhdfzQY68YxLwwZD44h0AqfAgSMUbPhYe9Mg74JF34RWCwtuLNwi8FVCjR0LILe+cTcbgQ7FooDHDoNex2jzLlGUREe5sbP5rGZLNAtwuF7QaBVkPPozIaisx9/dT6NCqM7auXsXKEhJO/3EAqVoDwsMqQvIPhVAxCrlWFyzpdlSuVB0B7dvC2KQxLJVDEZGajgNWG5xOB/L4JFT5Gb4n8+eFB8+w1J48SeZRMJY7SmpEgWVeakGh8EKKKtyIzxv5I/8eFRgIIbe0tLQ0nidjVqARDlYocLHCQoY1EzFRlfHNuj3Y5LZDdumhYRVRmT+QM7rx2KEOGFopEadOHEflqiE4myrgrC0TAdUiYAsMgMbXFxqW2fu6BGyz5iD++FHUiKrKMnsD5E2boJGtsK1ei1RnFlJdWig8UiBJ8DZB5wUHqyBDEEX1cYWk08BPZ4SPwQhBp4VoMkPDCg6sBAODTgdb375whoZDLyuQ9RqIVIgg1wEVGAghNxkFLpcdoqiHxOd7FIUik84VpmMZJ6+p84fKMtslPicHv8afQqbDhjhokMMKAtnffQvD2bMIi4hAs2qVsXDRYrj1BvhXCsfyqbOw+OVNSNT5QNDoIGkE2N3sPAYd1tR7DIuHDcPuNdNxZtYMLFd2IiAuDkLcaehZemy8AODmjRpZgYOlU7NpA5x8QjjPR4CiNaL5wDsQ6/ZH9vlYtr9LbcTIUgxBckKxyxBZAcbAPl+g2wa9PQVmpxs6dxZ81M/MzumSYWGfceLCP2G2+KDHi89h+MNjWcFDhuQW4JI9hQ6jVlDno+C3iUcz1EEiNex/WhEaSaPOminyeSvYNrfbzUPzrCxya7SHIP8cFRgIITcRWZ1mfvrCv5CnM7EM0c3WsMxRvtDOTJAVz1N/lvmJbkXdLkuKOpHTp427wO6oAplliCxbhsFHRljXPqhQIRA6xYScxXPgcrogse2JaWl48aP3senlN1Hrw/eRGiDwbgnquAeKRouDKZkY9sNkfDV4CByTF+Pru+6FOzkJuVnpOJ+ZgiMuBUlGC+r4huHB5g1Qs04dyEYt7FotnKxwkJCei1idP35cOg93ta3NChDeQo/np8QydTXd7GeubIfgckLH3lsFJwS3hjdggIMVDNxODVwsXX9s2II/3p6E8R//CJ3ZDLdgVNPKcn9WsJKhFXRqoUCr0yDIYICGFTq0Wg1MRh30fhb4BQWiWkQlBAVq0bldW3Ru0uTG/3pJuUYFBkLITUNgGaSDFQimzluGmvVqw8/HV+2WKLHc1cUbA7JtDrcLLshqTVl2OFmtnhUy2OvMNh2RkaN2dPE0EtQaECgoSDn6N+K2O+BfryPCdQLMRiPSXDbAbcHqLVvRt+sBTB02Eg/MnYF4vZoIyLIEfqYNkh0zlq3GsEfG4fDfB6ANDoIrLRmu5GBocnIhKBokBfjiuF8gQuvXRWiFCJh0WmQ7rDDn2NUog2XdSuhNftCKSkHjRbX9gt7ToJEvPjCoU2rzKIEvjxqwNPB2DbLGU7jQs3Mmn/YFNAoio6shKCQMomBQ7416PlawEtW7orD7IbHCiKJGM8Dul5PHafh1nRk4f/Q8vpu9EHP79MPeedOofQQpggoMhJCbh8K7GDphMbKM3QCYJYcaTZBZoUDHMj43yxgNIuBwONQCg4tloAaWMfq7c7G6Rh0oDg0fU5llpCJElvlWtaViT262Oi5C1o7FOCIHwScwCKmxR+E2+4CP3vvUe29jw+/TcUdMdfwaewhpWs+Ivg52PY1bxGep53Bnx/aYg+PIs/gjgRVK0swupEgS7E4HMlm+vJcVdGr6BUDPzmll2XaGxojTPkCS3giD1g3JKUEsNlCw7PBMiSCzz+FmBZOCnhBuT3s+gbdhYAUSnqkrogSjQcMb+MGHpc/M1vPQhDfDF7UaVlDgvTEV8E5/TgOPyHj6Zup5t001K2CFkIBAdlgu8mwyJJcMrZ4eS5ALqMBACLlpyHDzLA8+rMYe4OOnNgRUa81Op1pAUBc+ABKreWtZQYEvvDFhjq8/Wm3ewsdj4sPbqo0DdfYc3N6zPd7M0iBOk8sySAf2Zqajd1gFnDyyF0qeHXaNDkaNiKZ39sfRP1dgxxtv4m8Nq6mLOrUdgpWlx60XcHvsQaQuWImc7HNw814P9lwI7GIarRmphiRMSkzBjJ1HoQ+LgFCzGgLad0FA7eqoLrphZ/uJogBJYIUbluHzTN5ld6iPEXjExCVLajSAFxj4IrHPqraJYJ9LlDRqewOZFRB8/f2h02h4l2p1Ubtc4sLjGa3siSTwrpvsarzrYcF91bDSikaT3/uCvbb4+wE0QjkphgoMhJCbBm/MZ2M1bz+/IFSuVAl+BrNaQOCZn8vlUgsMDsWTGXob8PF+iutmz4OfdrMawhcUWX200aJNawyvFoXgMaOw8vhJTJo7Heks809XXDCyGnl2rhWyXgsXy8gFjYweo+7GsinT0eSVV3Ha3wAXr6qzcztZpp6QkoKo0X2R/vKLEHJdarsBxWyCwAoAvCAgppxBanY2xMwMWHQmmBs4IbBafLbWjjPJaYg0scxaLxY0NnTbHAXjK/DPpYUnc1fbZfBCBPsMvHsl358vvPEiLxzI+Y8rvESd50+82pARnsGhZN76M78nBj+W4483dDqdp/AhaNnCz3Pjf7+kfKMCAyHkpiHJGlhYrf5U8hkEJEazmn6O2kvCrbZf8AyXzCMK3kaQPFPMc1qhrVEDG2bOYCewIT0lie0rYcmc+Xjv5dfwwosT8fR996OKSYPXf/gJa5KOoU3l6rAeP8gKDSwDd0uwGE04dy4VX/z4PeY/8xS6f/IxUnzMcPG2h3wSKIcDJ+tWh3+9JrDu2Qod+9NqYpmwCXYYRTP0LIPXKzkId6ZBf2wLfH9MgKtCJA4kJCLDEoilp7JYwUdh+/ARHvmjlTzo2Dkldm6tnS0au5qZ88IAn5lCz87NIwJOaGAVrOz/RQT7B0HHCkQ5J09AK7IihugpCPCGjfxYnXd4SMYl8XYf
vAeFwLaxdLJz6dii1evUYaNCBQt/+kNIEVRgIISUK3aXwzOpkqCDjT9iYDViJ6tl52TnIC09HS5WwzYrRmxZvAgGDR+TwAiDfwCg06Nrg3pYs/cgjOqQyrwfhANZyUmspi2iVce2OH7sKDJzc+FmCyt7IMdmxwus0PDR629j4vtvYd33X+OnGbMweelCROQ4WBoyYWUFBj6ss8yu9dU3P+CO3r3xUY9eeGrjOiTl58Eun1Bo/CvCf+FstK/fHCMqRqBW5Rj4uWzQVKsL8zNjkJeZBZeNT43Nqu4C7/3ACghGvadLKM+4WSHGnmfNf+zghujKYYUVVlhwyhDtLrUtA2+kyB/KaAWN+vlMMCNbm4O1S5bhmb3r4GK3zcXnpjCZ2HlM7C+86GkcyY6DhhU09CJkVyZEd7bavVItYMkiFI0PdGY/dp984Ve7FgIDjeow1IQURgUGQki5Uq/fA8hJS0BwaAACLb4w6zVwu5zQswKB3mDgORx8THr4RPhBz7I1rdkIHatl81D+woW/sj9qFlbQcHjC7iyjz2Y1dj5gIs+s9YoW1atWgyM3B/HxpwpGWMyRXXj5xZfwv1ffxFOvvYGvn3wOzz88HlqbE1Z7LmxGN1xarRqxaNKsGc4lnMX8rXuw3JmL7CAThKZNgaoxSNG4kH70ADamGnEqLgchJ8+gbl0L4qbNhI3V2HV8lEbezdHNoyCS+tNDUdfrtDq16QCf1JgPm+AZlwGwOKG2y+BFIKso5Y/wCOTlubBk0wr1fZuatSFqjer4ClpWQBJEGVqjgb3WQeatLQRWCBFYAUuuCruSB4kXTBRWGNF62krExsZBWzsBOz9IUFNEPSRIcVRgIISUKy1bNMLRLdno1rYV+wN1Yehjnll72yZ4FzVEn//snS++xgCWsbJatOyJp/PeEr58XAXeQJD3atCxmjMfTElvRL2AUGRlpyHu9Gk4JYc6ZkOq2463Xn2BbTeh/6CBWDr/T1j0LIPmgwu7ABufctoloVajRojfdwTN33gJB6LCIZkt4M0MbJIWe/Q6+KUkw3w0EZ06hEOoE47MhVkwR1ZlmbngaYQoSGrBh7eN4J+Lt0fg+EBLfARHSWClBa2gFoL457SZ+FZJjUzILldB2wOzwYzwkEpF2iJ4F5Glw/NIQusZWjp/u7e3hcT/EzyTO/LjM4/m4kxsEhIy4xAdWPXG/LLJTYUKDISQcoNnZk3qRuH8dgW+JrO6ThB5LdzTgFHPm//x9gos087NzVWf08t8LAOzL59UDHoDH2JZVEeA5IUEXuBw84Z8rCavU6vuvJuirL52S26YTX6oXachKzikIvHMWXXMJJvLDY1sx4xp09X0BPsHgPc3cPNeBTYHFLcL2akZGHrfPVg15Rs0+eIXnDMa1WaJgkuAjZ1jX40QNH1kEPYP3Yj6kgxzgK86OJJO9jRM5L06+KLheXf+fBD880l8REZRUdsX8Axf4k8v8tsuIH8mSq1G7+n1kB/xMGnNakGAn09xeRo1akS2zaGwApKg9nYQFBFaVobS8y6hiqagcaTaI4NHGXj3U8EIp82JUxmHUDm4Gj2QIBehAgMhpFzh4XGLjw+CgoI8GSkfa0HO7yDI2zQ4XcjLylEHanLKEsxGAywWH3UMgsCgYFjznJ6Gj6yQYLPbYLfbPT0o2OJ0OdXXfOAldcTG/J4IPux6DRo2QFZWJuLjY9XMmO/Ht2VkZsI/IJi9d7OFPxuQ1OPXrVmD5XP/xIyB/fDgsQQcimAZcI5NbVFotdmQfu4kzAdTca6yBaIgskUpFC2R1PTwsSCcvLeC2xM9UB9Q8MiJJn9SiUL4KI1i/lDXPAqh1Xgy/oAAP7XAkJGWrn5Wnj5Rox4AreJpiWAymeDrZ+ZTs7NChAEaQVQfx/Dj3fkFBh8/H7a/1lMs4VEIKjGQYqjAQAgpN3iGmpuWDRfLALOzs9V1PFLAF/WRBKv5O2x2VmDIVrc7WObsJ/H2fAaWybKsTtRD0ErqGAY82C7n96DIyskuGKvBez7+Xi+InkaHgkt9fKFnGWrdOg2Qm5eJc+fOeUZaZJt5o0snK6jICh9MSVEjDnxkxgkTnsWunTswYNPfyMrIwzl2VSMrNDRcMhWR4aGIeuZuOB55RR0fwXriuNrFkn9GnrHb+eBS6ngLnmGeeaNKvp0XVnT5YyLwQgLPvHmBoGB0x/z5IXiBQdKz++TMVM8vCXZYXVmsvMD2k1nGr9fCreWTVfnAptOoPUMcshNat1btHeEtD/BBrwQ3K9DoBZiDFcSE1KKyAikRFRgIIeXK+u17cfhsBk4sXaNmpN6avqxjtWK7DVq7E3ZnLmy8UaM5GJYUB1yxZ2DSiujXvA3+PrBNfQTBM3ZrTi7knDTIThsEhxW23BwILNMUZE/Bwfts3zuOg06vg5MVKrSmIOh8QtXBjuxZ6XBpWaZq88ySzBtQ8qO8kzQ1adoM8fHx2PrRl7Cz9DYR3ajw+yzYRCDFHYuQB3qg99atUPzNnkzcLUDrywo2Ggsc2ZksI/cUQlySWx3+WY2O8MJMnpVl5G52XbtnGGfJM+6Dws4ruB1qYYIXjPrUqo4NGxbgW/tG2CUFtjw79BChMSrQSRoYJN5QVAOzwQWNXQ+3jhUmZINnPAp1PAfAxu9VTQeCtQYEm6pAcsvqnBOEFEYFBkJIubL6t89YBslqvPx5Pn8KwTJSxWnFV3VaQFJnWNSwbaw2rpGRywoPuXZWrzeZ1DC9ccNi3BcUhsDoyuojDT5iodE3gGXOrEauZ5mkTq8OqsSnjOaPOLyPCARRQU6eE7t27ICWD4nsY4Zeb0LFqMqo26guHhk8DKnZ8UiHEymiA4IkF0Qq+DmaNm2KXVt3Yfialch79gtkGyrgoD0L4YoWZwQZh1r2hg/L6X0DwyBHR2L4A+Pg07Am5jz6Oqx+WrVbJR/ymUcz5PzCAe92qfDHFIpnHgi10MC7VLplz2MH0TMDpSTwsSl8MCKpNjQBPnBYbWrbDS0rmPCulPzZAm/QySeWyNYqSLXnwi3k8YGg1UUWWMFJ8TQajdBEYdIP/dB8cH90/PSJMv0ekPKHCgyEkHJFw4dd9o4xxPNzliGfWrUR81gtv0+vPpD9faDnDf5YZu0naOHHChd2jWf0Q56lukQJqewFH1nRnZ6uTkzFCx7enhUaHuqHkD9MsqdboyQJWL9qDUw+FnVkR9Goh8Vsxp9//qnup49LQzWWKTfVGjFXOoM8yQW940LDwbi4ODz1/FOol6TAqPWFO7I+kuLioWeZu4FdQ68I6myR5nb9Ue+Je2BpWxVJU5bDPqQhNLw3g9qmwFMI0eSPwqjLT6OaZnWeKPUhC7T5QzbzFg42kR3DCg85koL5M2ehZacusGgNbB/P4xg+cyePokh8PIb8175qwcNdcL/F/JPxs6bZ7Nj+61SE/RXM0jC+oPcFIRwVGAgh5ZrEMrpTK1chi2VpTrMBZr0eDo0OokGEgw//rOTPsyBp1YyPZ6QO2TNwE+/GKOWXPuT80R+l/ExYzSHF/LA7K6NoDPyRAR/OWQuFZ7B6zxg
GfLZLncwzfAMkVmsfKlRV57Pw1RjVoaMr+4bCaPFF8pqzCDP4QjaYWKHFhYrhFeFMy1JnzNSxAoma3cceR4W0c9BmhSBt/QGgjgA7K3yo8z7wKEN+ckTey4E/glDLBp6JoDRqMUeEzeUquDc8IsF7U6hDYpt41EVAnujpesrPJ2r5Z3dDq/EM96zOV5VfWPAWdgrfGy073iHbkJNjV/cqNh8WucVRgYEQUq7xCnVaXrbayM/Mav0mtii8wMAHJOKNFx0O2PmET2pPivzmeoJ3ciWlYEhlNVPO704oFxo6Wi08KBq4+eiLGk9o3q32IMgvWHh7aMCTQbtZHmzgo0+yTFqn8OGZbdA4eObqRo7OrQ61rGXnMLBM3xIaAiPv0ijnZ9A5Dhx6bTLO8p4NJhMczUI9Iyoq+dNNF8LTwVKh9s7QSCybz0+PqFyYBpt3u9QYtDCxQopBNKi9IVBo0inv+A4GraeLJm+A4XKzu+SWPZ9PVJ9UqGng19PrzaygIao9SHhEAhRgIIVQgYEQUq7xjC8iLAIWlqlHRETAaDKzzF1UH0G4WGFBYQUDK6uN5+bmFHRE5DVunlnyboS+vr4FjRt5TwheSPBOVOXNeHlezbtp8kWW3Or4Dnptfm4piupwyzrZrdbwdTzzVrstCmp+6uJjIvDHHqInk+YFA33+YFI8DXyqaT6EdcHgSYpBbZ9g9POBzsrSYdZ7CgzKhW6UBW0r4HmUwnt08LR7jkfBvoKOfUaTJw6gETw9KPgjDn5PxILxGzzn8RG06t10svTkuJ1qI0uFP56RlfyPyQoMRlYY4w0hTUbwOTFNoIaP5AIqMBBCyjUN+6/ugAEInj8fyXFn1IyXT/nszMmDyykhL9eKbIcVmTlpsDvt8ET2RXVcBovZD9k+FpgtFjUz1bOcnw/JrI4/wAoHfEhoXhvPczmQmZwIR162+kiCs2Vo1R4R/HyCr4EVSKxsTx1yBRkW9krPMls+XFMeO4fIMmEXz5wlT7jf6eCzP6qzRngGdJJ5xizmD9Qkwa24oHHJiDt0DKJBp6ZBFJwFBQF1FEs3W8syeZfCp7vOUwsMvKCjNsjIf/Zi5u09/Eww6llKcmxIP52gdsnkvUREteukoD6OsPF0sIIAb78hs3uU47LDarfD4XIWdFnlZzWygoIimgGLDgaRCgukKCowEELKHVlxYuXxODgNGnXcBSXAB1UrRuL4+g18ZCfYczI8Iy/mc4h6T23ex4hknRmC2cgyPxOU8GCIfn7Qmc3q4wyeOYcGBcLfJSLl7Hn45WSqEQReu67ToClOHD4CKx9B0miCEOCHsa+/iXpVqqHuyLuwcO4CpLgzoVc88zTkskxcUHIga9nxrMBhzmM1cpaBB0CPMKeAENEELct0/RQtjNoA3kwCFlYCMavrdfBNSUNVHwXGAD22ZaThLDuvwsd64OUOdk7+iMKo0YFPTxXICg2VRQt8HAZ1Aiq76Okxoc44KbACC/tL3tu3FZxn3TiTdApnJRcS2ed1mrXqqJeCoIFJ8cxcaWQ7B2sCEWx3weiyqlEKrVanRmNErQH1+7aDOLg9Swc7Lwxl9RUg5RAVGAgh5Y4gaXHv5G9gPnICDWJqqJM2ads2V6dh9vR4ED3jCOTvr1dHg2Q1ZaMO0XYRfDgkPpIhH00x1SkjxZEDd0oGq7VL2LNrN5T0XLj4IwRW29ZoPM/sJUGnXsdsYBmnnmXI7OeBQ4ex2Cccxqg6ELr2R6Akwia71SiHPx8x0eTDChc6uLNy1XNkCmy7oMUpHldwex4LKG4+kiOPFrDXPCLAyzky7yqpQMOjB+xThA8bg3FtauDPpdMQyAdrVOSCxxsuVjA6r3Ejia3QSp5HF942GIqkqL1AeI+KfXt2wG3NhSbMBehMMBkEVKsVkz+MtB463naBzyXBR5FkhY4cnadQwgeL4pNS8f3yMrOxZct6hIUF4sG7ugHGsvoGkPKICgyEkHLHJbKMMjsP9YMqorJPkGdCJR4iV8P2PNCuUzNV3sDQE8a3FDyzl/We7onqbJVsH7eLTwjNh4K2q48YzvFjAwLVUSOB/PkqWD2eRwp4lEIdYVGvUxtZ8veJ5jBUCQ+G1eZARq4Dcp4El92htqHgvTUEXlhxWlnBgA/vrIGDpYmP+8C7O8p8FEo+f4NPgPo4QG0jwdIjKDrPWBB85EatBKtBh4PZ2Whn0UPjp1EjBwUURT1Ok99pQh19Mj+64h0PQmaFn0TZCac6UqQRBgMrILBzRvla1P283SPVdo98bg05fxAoeNo9KLKg3lc/HzMcrHCTabey+yhQm0dSBBUYCCE3RPHpkos8ry+2LddhU5/xW4x6Vou2qwM5qd0SC3oS2NTGerJ34qb83hDe2RgLGjPygZX4+dliUET1mb3isqvP9os0HmQFEVE2ejJlvUbtOaBlGbqOZf7p2bkQzidC5LNgSlro+PDOfCZJUadGMkTePkES4OQFGVZ717Cau5s3MMxv6Mh7dAhmMztGWzBhlMLbOrCFN2aE2ojTCovBH+7cXLUQwT+TVOiRC9Q5Hzyfi/e40Ljze3koDgh8zAa1V4baEoKlXYaPkV3LaIbsdOd3H72Q9RcZW4H3ulBvgKebqcJugMaVg1BRKuhx4r2fNCYDoQIDIeS6KzxIUvH3hbs5erfxjJ7XzP39AhEaHKZOusQjBXanDVarVd3O83resM879oKBZZB8xkrP+TyND71zQfAJrWQ+zLJLpw7uBHVYZE9vCjUNGtHzSINn/m6oo0YKPDzACgJ2Wx7SUlPVdRqtgWXIovrMn4cPTBLLVvmMmRpBDfcLWs+U0nq+nS28gMCn0pYNFlbjNwB8tEmWHh27nkb0fG5+Xr1vIE5k2NGAJ05ttalRHxuoSeXpUzNtqCM7uvhMlgUDTnm6j/LCDJ8fghcZNKygIvJr6z2NKdUCGR/DwVsmE/ILAfxxBI+KKEJB75L8wAPCQoIgOZxw5I8R4V14OxBy66ICAyHkusvLy1Nr095xDwoXELyZnpc6e6TOAB//AKQ57NBm5aijCvAeAnZHrnouPtujOgOkxvMYgY8/IPEJogSNmnEqcF64uNtzTX68i6XB7naqmbGQv96TBqhheT53hE7RweZyQuvyRAjkbAE2B5/RUauOAqmw7aJWA4lfhxUCeAbNoxVudWIpnTroEvh2VjjghQGdyQKYnLD4+MLkw6eZ5oULzyMJ3pOD5ewwsIJLnl0DJ49i8N4NfJjo/Kmw5YLogqdnh4NHJdSPxx9V5KnreBryoGP7OdSoCG+8yNNgY/dPyr/nktZU9JEEP6+TT7rlOYecX1BjJ0FwkB+Sz52HC575NjT5j2eowHBrowIDIeS647MzqpGB/MyreOHBG/ZWZ6RkmVYmy88cUi5W8rkTEs+ojx945ua256rhdZZFwsQLC7yFP8sstU43y3ztnkxYbeuQ/yjCJXkeP/D2DaxmLjtcyPQLgJyYxtbxGS0F6Fimreg9GSKMJmj0FuSxfJxPRGUwGGFKPwlrSoaaAasDHQkWtSAgmvzhtrrUeS3cyH9EwAsRPH
L5B1m4CM4BGsOMbDjFEZOLBlG9SDpqGHvmkeCjQewX8XHkT/iaPBLT8Go4MP2l6LzdsPIqlXH0hrfscvh0WYNXsYlM38b97u32CJvQLpzH58v9GGaVcNg/XAahwSDsZ1/Sms/K0I2f17wHFsLQ64UjAqI4BVB2yYNWk4RKY9+GiFCdOu7YL1n/wCw6BxSEI5liw/ikl3TkPZylXg9ZuOwfES7F76BWxJo5FkXoelxTrMnNgLtoKDqOGnoUciByuXrEHG2GsRUbsBvxWKMXZEBxRuWw9H2gikuSvg56YgITMbOT+vRm5VOqIs27HySABjR3VHw951OMzthmtGZzV9S6IbseTzHyDrNRaZ4npU1DqQGfQhfyOpQzARw7sYsHvtKsh7TMOgDFVIHDCV78bitVUYO64f6RvbQKcNQHT1buzzxWFEtyjsXbcSom5ToS5ZjpozjEU0HML73+1B/xlXI8adh583FmP06AHY+fN3kPWdjKEdY5GzYw1sKVPQjd6Mz3dzMXVCH1TvXo5tlQrMmD4M9oPrcRTZGNddhNUrD6HLyKFA8XbstCViZtcAVm8y45qxUfjmq03IHD0VPeIk2LnyZ/gSB6B/PLBuxWYkjJ6NvnFNigFrxV5SjjoMGdUb9XvXolTZG0MVFfiZ0G762E44vn0rBB1HI1vYPB6ZAKpyduNIgxx9OyVh57KvUNhvNnoLjuKrVZUYMWEQbEUFsHni4G0swerfKnH1tYNwdMli1McOxsBkPn5bsQ5xgyejf6r2QpecvwRhnwx/PwgEFLIy49hlN+r9tcV7ioq3B3gI1r9468Ap2UkRv//R/EK+EzgBJErE6G2QkfXSFVIQ8HgU2olaeUb46AB8Xjc6xkTiUHVDSMHACqItlg7s3/Z8NFyMMqG90JYhVUarfMtjF0u4yN99lRbERspQXF2L7joigNNO+JxV4LgIG+UNQMCvgU4XjQPlAtC1pQiSeZ3HRs7gFYNLqcAzmyGL1cBD1kqZUAWPTITMuCgIFSoEZFo8++y/oYuJQoPJioriMtQWliL29uvR74rxaFj8AxRSLez5e9ElNh7FfAoKPw2n3x3aXsClufApJJCY+OCQelA+J5l/i6HK6gyTRASLsRYqjR7GRiOaVQzgBVrqTujKVlhCuAOnCxxSr7JSoFuPruCs3wj+qEH4cC2wPq8CE3tzIGV+Q1ZsNmZO7g+BZTNsNhqNbh8qK2l46Qrw6h3Y5YpFF60C3WO58DJGLP/qLpjNEsgiu0Ks0sLtYVBWlAerKQ96NWH8vX5o5F7YGmxY9+Zj4NT5MWjOXQiKeOCROZR3jrCjYfx10Khk0HRP47MKh/kf/j61qKTGP6Bj9IrH5vS+TiriXzYz8j+C6LjE4qw+I389tmvdxLbX929ZOWPAjHuei4qOrTj1mcLigvQdtjXjLlUZ+kcM3da9U6+dZ0szcsKMhckZ2YeXvv/ApxU5R3uyShGJTGmO7TJ82fCrbns9OS0j53Jtf7jUYL/sy1T67VCN6eB2uaQ1RQdvSJaYHoXfE3WGRxi7NO0xcVTWq3LqRB0pHo++lEoGH01fkJKBhUfV4V54j2bD54pvvcgEhZHB0rds1qgrFEpV4yUpZBtcsE8GmUYL61EjaPKUgKxqNmsjaJkaspbFUiCFXsFFZY0N3bQacIIuHNu+Gf7EHrAUmZAxaCyiNGIIumTh+M4qBOI0kMalIiVSBQGvofU9fKUWKR2IAKGywCA5CDtpKh3FOjpq2u8IkQIdyP3IGAlidSJYbQxaNi75LTUor/egi1gCaZQBgVoXYZiVkMZnIDs+ClJNDPiHq+AOKjBi5lVgPA04cKgWdoHyZNNGIhRmdYqHShLEjiIjInsMZD2cYuSMGeCLeTh2iKQJelBcVgN1zCCyAEuIIDAVXIEQjfsFiE1JQFRkJBjCfFRHZCPJoIc4SY/CSjcCDBfaBCIsRZPyRHTE7mP7YXQrIVHFICMjCpyGfDQG1BiXGgeNlIvO8ftRUm5EPHPC/JJhTm88WWwmspMMcEkTwC0xwWaqhMmvQB+1HMrIfpiVwgOfH8D4GQYErTU4UFkDq4JlGJIxZXocnA0lKK00QqrzwmyrQr3LALFEibgoJUodVrLMx4Dt6V53PTzSOKQnxkBLCqGXlzaXgEP6bpN3qVA5gwEYqyphFSohVuoRq+GjwuEBhzBysUlpiI2QgeuzE0aJQxgaLoJ+G8oqzdB2k8AQaQD/YCnsfi9oL81uNAI/FFucE/oiX19dBYeI5CkXgSsfgKsyKcJSVbfSQhgRhZTEWAgCpSipNMHQXwKlOhaiQBV8flI4EYf1SINawhjGdhoCg0YDTZ+u2Lsoh7zzCky8ZiL8tmr8XtYAX8K5LLkoCIlU6/WRcpK2tbkJQybno66sBMLIJHTtEweh9yccyKlAqi499ETAbSVtWo/ufSRQqOMgCebCQ8olUEciOzMVkdJ66IhAZ3OYkFvLRZ/RGdAIOejaMQpbqurRf0BvzEygUVdyHBU1hBH0nZjPbOVlECR1R3aiHh4qG9qSIvgcFpTWmtFBJENkQiS4e6rgIbyv8jQ+iEJ0ehYSoqLgshKhOs8Op9mNcqMH3ciY0kbpQR9zk74nbI3xxrT0S0s1LCIivHeIR4STh2htCTho3tLQ0llbOjBXhISMJBiUQhSWk7JkdoBEY0CkLAgPYf7qq0i7ZI5GpC4K2tETYCV9IOc3Mm7TMki6SJKOCaWrq6gh6UYiWqdHVI9s7P+1ECY6CwZWx2mpgokbh4GdEqH1SBFnyAfjc5Ax2AhDn74Qk3lkyMSZoITiVosouTYD06Ymw1pTRPpiIyIMNhRVGaHv3iuUftCEpvT5xWcfi0KJDjFEeIlwJyOSOo5GjxccRSQ6JkVBJmpjnM3hI4r005hoAwSxMSgWxpH5UA9zciQOH3URklbAJjEgQi6FtPsViOcKwDWdsJQUyqPQISsafFs56t0SDCM01agodE/ZjwPFdegZlxTSKhsL88DVd4ZaKoFuyCRk8wXg+fWYEUfDWF5A5mwjojKC4Eh5TeORCBWNVVWw8DLI2NWRsSpAid0Ds6sSmozuSIuNhM2Zhr3WE2ODcdSh0ELKMDkVWopG5zQlEbzM/zglQxh/bwiIsNoho0nh8OH6kl2VFbusEj5H+M3jY8/5bAvYWYliRwblx/AUA/Yay0JRIrxeNgT7+QuHrAKBIuWJkEkhNtnJbwF8Pm+zMoFzNl/fF41zKSnYuwUNHljJWJWYC1BdVY5EMq/zqTg0WnaFErBzaC+dD7u5wpB1AR2gQ1YFfA6XCPQceMx2SOIM4LjIOizjkDVci99//BUusrbRLuDR/zyLW264CfNvuz20hh47ngOjTwx9eQlZj+WweQOov/MxdD1WiBy/E7GEnzhUTibPzDhUNjQiSqcBt7yUrJ0+1vEanHWED+zGg0ilQENtLZTJXZp5ira1avZgSSZeis8HQ8rLNJhAN1bhuMWLqJ6doCfvqx7cD4X1HfGfVb9D4axApw77sXa3BJ26sh8uDkDhrQQb7IJMqXBZ6tDYYEZ5QQR
ocRzpGXKolA5yOOHxboC7kRta25Mi/OBHBENWLnUNhCZOC9QKGjKy4Bw89BIqqg/gmkfeRJCrCJU0QNNNPjBI32LDTpzwvdnGC2cYfynUKhl6dkvjk5E/+a7/7raWlNR4hnWLWfTkdf1u+KvLlj54xicFB7aN9HndrWaEbodNtXXFD3Nn3HLf86em31a8fnRBY2H0pXi3TCD1Xd3n9lfPJ21yevaRe99c1ftSvPfvArFE4kzu1P9d8vPd2prqBKXj+BfigHUQu4WCtWFj+NISe0SvKxWqiPxTn+VxuQHCRl0yJYPXHzhHiKAzIyomPq+myPxkFAo+R5tJRxhwdnPWH3sMygH3neXxCwLl8XhErHMQdjFhte7ni+jsobgxvhbbVi1FjTOIlE79cNMsNtxPcwKuFP2uuhm1+XuwbPE6+IVadOo1AFlRZMK9Zi4K9mzFDzsaoU3qgmuu6g5Yy5CWKAvtZyFSH5IzUqAUUohLTYOaCPKgRIhPS4WK4kGf3Q8J63bh9zwZtPGpRDoSsQZv0CWkQqGkIOUmIoWWgx8TjxFZm7B9+a8wJCQiI1kCMWGsUxOpkLMnCCKQnsaFSCCB156Pnfvy4BfFYFCfbAipZvqT98alpEAlYNlzAUaTsuft345VK+yISOmCAV2JEBWbAq5GjW7dZqHs8C6sXrkHUkMa+hFBVapLQKJfHTLJk+njkdzsU0SsjkEyI4dULkeSvgbb1vwCO6PG8KunEPG9BhVpCH21lxFBdNZkOXZtXU7ozEeH3uMwMU4Nn2ssXJu24MdfhFDLDGTBP2HaLlBGIiOBF2KP+KIIpKaIoI/Pwqxxxdi5Yy0aaRm69R2A9CgxguXHseNACRh5Egb3zYSMdIGGokPYn1sNUWI39MuMhoYskh137sSvv65BSkIs0pTypu2QhERyXWdMHSbCjpU/gSYCk9nNKvAk6DVmLDb/tgI/HlFApUpEvEKBpL6DYN/xO5Ytr0OiLgOxRKqVipKRyJM19XaBAf37pWL7tu2IHtADQwemY/fWVaiNiUVmehz4PCGksqbxFfQHEZ+SGhLS9F1G4hp9LjavWQE3X41eAwYjhtA1NT0BMqEISanJZJIkjJI0GWMHW7B59TIUkzbL6JQKScjKjLQtl0KvsVeh4tg+/LBoB6SR6bj6hmmQ8oKoyPkde46Wg2PIwuCuiaeMBA7EKgPS4pu/SnH5uGLqNBz8fRsW7fYjudt4jElTgSGr1q6NS1Fq9iMuuyfGZ5/Ih6eMw5Uju+K3tctQEpOKzG4ZkHG5SEhPI12btCLreCojDQpFHGbdOA17N6/AbqMXSWTMzRodTQREO4r378PRUhOhxWB0izkxB6nJ+8dLdmPZD9+HlDsJSXGQRRI69LFh+6pfoUlOIAxXDOtrq3X7qDaW0EUtgZT0xGSOMhSGi6+KQXqCEqLYKAzP2IitZExFkTGVniiFLi4Bw+17seynn6CJ1CIlJRpC0i+uGVuArUt/hJ8wrUY3a46qwMBxV2DThtX4/oAUBnUiotRiyEWJSOAqSRPI0XfkEGzashPLKg2IT+wIhZSLtHEzUbT/N7zw7xxkDB2Pcf0z0Hv4YGzeuqspXUI25DIu0sfMRHnOPixZvAOSqAzMnDsOshZbLW1nzJokx9ZlP8JDKSA2JBGGUI+sa2Yh5/edWL7PBF1KV/TrkX6iXQRCmHIP4mB+LWSpvdCnI5kHusYhdzdJf4DMXYTp7d8zA5lDx8JxhrHIwu82Y/f65XDaKHQeORkpGsJDp5A+KWwqnDY+jdBbCGEwFsk8TWiukEbEIlWgbhrPikgy7gSI7tQbV6lzsGP9SniEBvQZ2BuRUjKHJYtBCVVITUsAO4OLFPGYOV2EvbtWY5uFg9TuIzAj+cTWnJQhMyEu3I9NZLxwyNjs368rdCIeio/txeESI1QdBqBXigoSJhNdDOuw+0A5RvYeCPtOQu/l9UjSpSFOxUNMtynof3QXlvxwgN3zh+SEONJVxUhPZcgclIrZc7TYQ/r8DiuQ1nUIpiWpsWvF9/CmDsGQjEj8ExD2yfD3h1LCL+d4nDt25FcOeXBWr/lDu8U+TS53+iN5cJqNOtXkz6iUCCw/WgOazztJedqiPWSCXLRYLQdDgRqa+McWQZ9VTmTHaLA7rwpeh4WswU0Ta4A87yPrjEAggJAvaPWj0BIdoiWP9hQGbSNTnMt6IZQH9/Q8KMJR/FhUhgdSYnGE/I3UqSHouAD08Z0QCevgdgVQZslBr3g1Nud4QFsb4dRHQMgVwc7xg1dTCVHnDgg0NkIo15P5ipS9/DD8Ti+hAQ9V+YVYt2Yjiuvs+H3dWsy78WZoybr/U+Eh7Nm8Ht7xEzBp42Z4TbUQZ3YApOwWBzcEhIgmiia8nQw8Dp/wBGRNDYhQ3VgHodsDCxHMPTVm6LqQNSnIg625roE2+h+K5jWJ6qySOFIPblUNXIW5KLXWQiBVITG6I6xMEfhD+8Ntb8B2uwX+CguWHt0PhnahV2osBmcGIOOR8sAHhuOAgyE0sNTAR8rCV7ihp/RwBWlwA16ohZGI1gmwe08+4acIP8ejoVWxVSI9iROEP2hGQdkifPfeNkRHZqO6thECtg39DOy0GxxeJNIyBqHzkDmQkLx4FKuAuGDZIYxLCIrH9cTIOIuPHa274qqhaZ/dMr7TM391mVgMGjFm+cav9FW+2rK0ttf3LH3v8QiNxjx08pz/tmybcLqd0u/zvphzqd7dN3L4ul7ZfXZdqvzOB4FAgMrNzR2h0+mGabXaUWSuYz2BNmts0fTpMhAoJPPd2uLi4tUJCQkb+Hz+ZffnEBkVXUYk4GHsb9ZSg7V4YAukOEN6PkX5eVzOJYs34faev+PH9qBLyPq2LqdurIFvm9n2uoZpuDXn8L61HTr3WH1xJTwZ1JtvvnnPhfhkCD0sj8TQKyedNU1kei9MIsfJECCt1whytLmkS8TgVj5Ygb7DBoZ+RQ0a3HxNiR6DBzX/1mPkxBaroROKutRezWkje6ElVkd6vyvIcfLbB+mbf8gSMHRwQtPvpK6YRI7TIFCg68CBrac8gRRZfUeR40SSxC79kdj8O6HzIHK0eT6lJ1pUetq0Hmj5jqeM64R+cU2/BxvScTISMGRQQpsiRGHwmMknF0uiRf8rp55eXgJJdAaGNpNFokjEgBZi6FIwckLKSWkjSSNMTTu5fVK6DCJH2ytEcB92JU5txSZwoYrNxJXkgN+MvdJcKPnsvqsYXDFlxilpFRh0Sj3I9IUTtkdcRGYPwLQWp8KEyLFZ7b4UXL4YPQcOaj2XR2Vi/LS226Z0GDikqUNpB53oAPrMXrgqs/2asJ8x4ojgzh5t6xeX1S90nAlKQu8hbfTFlESDnoRePduk4RAhsN+YSWg/Fx40RMC9ihxtoR/S0vc16DO8pa5S9Bwx/qS8CceHTv1GotMZihiZ0RvTMk5WLMd07IMZHfu0mz6p84DmXypom/uoOLpTa5/KHDAKmac8k9BlIDlOvmby+0MevgNEqB83awTiJQKyOkSRsXtqv+2NliEp1CRi9OTE08qU1r
E3Bnt1SO6cChG7HYukGzXp9HSJHclY7NhutSDVJmHMlKRTrgrRof9IcpyenkuJkN59CDnaXqWQ2W8EOdpcOstYhEiD7K5d0HVQN7R1M9J7wIkMksk7mtCtdd7SJHVGSytII7MwoFkmFyZ0xPiEthVMwtDmhwYNORH2mS/To9/ICe33Ny5rodIbk9Pb9gkBsvuOIEfbhFJ0Hzm59WzA6Mk4FQmdB5483xEMHdDcaUQq9B01CW2z7DtuJsII42IhE/Hq+LRn09YD5cPuv7rnQ6N6ZS9C80pLGL+nLzTfIOEF++uU2Ce1otjuQks0S1apwARPd/rCfllvz1eDnBIiEm7klJfA01BHBFUinPL58EVEwJDaKRRaUsKGvhRJiWDNa1U4nBpFouX8TBEleH/A/jTIccHi5COPoaAh+R0rqUXPjG5QDH8Z4qK7SX0jECGmUV5bgz4Skt5YB5M6FoyYCO52J/h+HzSEPnyHD1xS+AYiqHOIAC1kiOBP5hTa6ceGnb8jLxDEEy++hnjCB+QXHMfub5cis28/HNm+BgczO0KfkQkq4IeXJafVCo7XDx8pk1StACNTwWuugd/rBMcmho7QpkIggc9WSgR7MvsKeBCRZnEzTcoULgKgGB+EQglETicYswlemw0+tlUUIvhq6uEzF+EYJQFHFUHSaSGT6SA3eEFrGsC12EISSymfi+JiD9wuJ5wkH7+vRVZhwA8E0Dm2L+aPT0eyaQkUAjdsTBFKapXoP7AXPO56HCuuhEJM2ouh4fP5oVDI0IXwniW1hSisqkRdlR119Qx0kYTekXGgLbUoXroEB/cshFqThUnXvw2pQgUeV8i6dQInvLviTwWXy6Fj5byfVu/IHzymT+LC28Z3efLuKZ3cf3W52oIVZvtPvvW1Vf99/IO2vhlon1e09sv/vLz+65demHDPW1f3Hzpq1ea9G0bm1hzuxt7nks4kpIQMn8eHkEf+UgLwuQIOnxJy+DwBBORcQP7KiGyjEMmhECohFynIoYJcoibXVOgg6/L9xYSOPF94vV6xzWabq9PpniZzXqREIgn5QJHJZGB/nwJeIBDocPDgwQ5RUVH/Yp3sEhjtdvsb5PdbYrH4skcPOR9niQIB33dJlQw++qKUDBTF93P0nRfAvncw6DZbP4K0NFNUucjYENNdq4ssvuiCtrxPpVJZkpOTizUaTfKlyjSM/1Hw1eg55MzCeBj/W9DEZWFM3Bm0RH8UEj0GtmoH/0GQx2HQoLi/uhRhXCDCPhn+PhDxuWYibq7bvL902N3Tuz80vl+nL3HtGRTGFwwuKI4fQ1NUKD7iDDnypYMt26E4YIgwzWGaLQdCkmA7+6MILHI1bJ16IplMWdHTH0Idn4NUmouDtz6MwmM7kTl0DBi+CC4HEYxpf1O4S5JRaEvZOawazrQ14lxbJliFRJDiY31OPeZkRaO6LAdWwmIK6p8NKRhEbj6WHaiDy0Mh6LfDW5kHTnQmmcPU8BirIKOEcLhtbPB12D0OCH0Melx9JdZ+8Bo0Hj/06akoKCjBkMwM9OmQBrWUj+LcHEiyuyNmxCBwYqLhzitBZ70BZRwabDRKPqk3l8eD1+dEMCgHzQlCEGAgFojhsjpBuT1gSMIglzUqoSGQSOFxWEN2J0FCK0OQD3FeAd4aOxaZifHYu30bBg4fBpfTCwGh73d79uLNDStQb6wHYy2Dx0fDG/CSvBj2awBpRJpIKn7AxYbG9JMy0OCwziAZAWsEHTI/6ZOQhY6xsZBGDcW/XliCqwd2hkaZC5vfiP2lTmiEKnTt3BFWixV2Uz2EfDnSU9KhUDNw5HhJ+2Zi3r1vQxUbjbo6I/Jy8uFxWsERSsFl+Ni29DO88chQXHPnO0jOJnTi8lota8K4fCDDJRit4P2y4ffCXoM6xSy/9cruC/41pePfOqxl1wGjl+1f9eXdtWX5JzFWmqj44jteWTJIrlBY2fMoWay35Dl3s5luCJxT/v4hEEH/gv0AnC/MZvM9arX6BZ1OFzIFZeezpKSkUHhg9i+5f9ozbJqNGzciLq6Vx9LK5XJ268i/LRbL8+T3szwe77IrR84GAZ/yXUqfDJ4LiC5xKiiK7wH39BCunKBfqXUcXmoTiAcplErLxb4n9K7bb7/9Q/bARcSzPR1e5O3bD6OHCalkRaxZcmYSxNQF9e9zgt3Tnp9fCAsZU5roZCTHasG/WE1w0Ifq4lLwIpNgOO+wDwystaUoLK9HUChHYnIK6fHCP3W3nauxDOU2IVISIptoQBbmurJS0KpYxKj+vtECzg8+VBWWkDZJQ6Ts7A3sNZejxCZBaoL2EizXftQWFSCgS4fcXgIjPxZJ+hN79wNuC2lzExIzkvF3NHj02hpQYqSRnhSFdqxoLwx+K/IKTYhNS4L0MvJDrgbSn90KZMar//Cz5upC2CgD4vXy08eg24iccieSMhLw9xoVQRjLCuGRx5+07cJjqkaxlUJW0vkrWkwV+bCJ4pCo+zOi7JwA7ahHUR1NGIPoE9vnwgjjAsCnuE45x7d68/7SwfMmdX188qDOHz82u+dlc9vPCvnsN8KuWhmuzYpFhc2LWpMd9W4nrEQwdhMh1R9gQx6y/krOXIwyIgjnNTYistEOGZ+DYg5F5iIinNtd8HrcOLZuJ7qNGwlVfCxqGxvgp4NE4GTjVDAhx4enWjWcVMa215kzXG+3bnwImCDMDBe7zUEM5utg2fQcat1aWO1+7MupBO0nAnfABzaiu9JXDKvfC45OAqaIlI8I7vbyEkTGpcLkcoCWCEEX1yBt2lTULV2Cfz/9JN7976coOFqADVt2Iis+kgjTASg7JuNAbhXURXXol9kZPEJbW3UtEfYpUFIJfOSdfEJbm82JCL0GVF0t6uxWUAIhvFWVkGWkwC+oRsBkhVSlAW2uZSNPgjI1YEFCAmbcdAO0CckoJsJ7XUk1nn38GXCFWowa0gc6ayOW3jIfqxtq8PzCRWB4Afh9DvAIXyRk/R4FGLJ+CUK8kpHmwssXgKb9CHp84NIMeP4Au2MWt/3rTggJrYdOHI4lG3aBcfKRnqiGPoaD8qALx2tsiBJ40DlVhbhIwu/GBbC7qgKUvwvueOYjBIRSLPtxIejyX6DR+gj9OHB7/SjPd2LgFQuwYV0A37zwOOa//DW0kel/zEQljD+EaAW1YvOeoo69OxjW3za2x333Ten0t3DueD7QGSJrhs199P7l7y343G5qiBRJ5LbIxIwjg2be83SLgoGFUqmsJ39YqfyS7Et0u92X7UsJTdN8hmE2qdXq0+xJ2a1nCxYsaFfBwIK1ALv33nuxfPly6HQnRWvjqlSqJwKBwESbzTZKoVDUX6binxMCPmvJgEtmyUAHAhcdh5ChnfHwOU81622C19bRmL/jJUm3K+68FNYrVG1tbWRVVVVMZGQkYmJizv1ESyEDXtRXFKOoygIREWA7ZcWRhaLlrgv79x1Dp0lz0NFAxC77MXz08fcYO+dqqL2VKCishF+oRmp6MjR8LwrJOUdAo77OAU1cBhIUDqxbtQr5dgXGXjkeWdEClOccR6XVB
318BpJilC1WjDAW78JPWy2Yfc0odCArRX3RLizfXIkxvZJRWlYG2uuDKDIJUSI3CovK4eHIkZyWBo3Qh8LSagiDHtRavYhMyUKiXgbaaUJhHhFOyKJffvgYYkZFwyDjoaEsn9TVDKk+EenJURCeyjzTNmz5+Sc4s8ZhbO8+ZJGyYMvqFdAOGI2OWj6qCvJQ3uiEwpBMhD09aHsNyqps8HvscNACpGd1QISUg7riAhTXmCHRxiIjXo5DOzZiy6F69Bk+Hj3iOcivNJFOQEMdpyELuAApRHBkbNUoMHKIICZGRa0fMZkJ4PhsyMvJg9kXRF1+DmQ9xyFGJSCCVzHyS+vBV0YhIyORLLDNjUYYi8oSUm+y8FmsblLOJHRIMYAbJO1SloeSWjukuniysEaB46jD8Qoj690QuvQOiFE0CTEBjxXFhYWot9HQJ6YjNVodokP+cbYcFGJTMhCrk8JnrUd+QSkcjBjJGRmIVAjhd5mQe7wQ9qCACH6ZiFSKwPidKC8qRLXZi4jYVKTHyRBkQ4JxCCsWZBUOOagwuiAn7ZueYICAQ64VFaDU6IHIWYCD7kwkJETAb65BQVEF3FwFktNToJMJ4HcacexYAZxBEZKzskJlOJVFo10WFBYUwEKGWd2BXZANS0UPfgBBiiE0r8T+AhM4DOmTiSmgg01ODWmHEYdJvn6BCkpCFl1MCiIkDGrLilFca4MyJokwXwpU5BbDLaRgrWsM+UhINXCwccVyHKjjY/yV49AhWYm6wmMoq3dBFZuG1LiIkGPM0sIC1FgDMCSmIi36ZMGbFaoLyhogjIglfTS2VanHek9n6Rb67Xcg92guTB4GsWlZiNNKwaE9qCzOR4WZhlZOISBPQIc4OWHYclBSY4VIQ/JLiYOECsJeV47jJfUQCp3Ys9uMKxOTwPcYUZhPGFLCmSakpiJKxUN1YRGMhFHzc1Xomh4FS00pCsn4kejjiaAcCQoelOTkkLr4oCX1SyH1a4mqydBeVJfmk/HCxnHPwVFuj5CSwW2pRV5+GbyUHCnpqdDK2irXaTJ28giNrZDp2DEajeq83SiSDiBj3Y7qRi4ZK5EImsn8Y+YjU16JzRsLCRNN+rGLQnw661xWTPqrFfm5BTB7eUjKzEKU8nS1EdsvivILSf04iEpKI/OGoll5w8BG6llG+qvPZgYj0SIjMxVyQjdTTQkKyxvAlRqQScanLGDBkfxqBINe8GRiFG3fiGIkYsKoQeQe3eRAMjT28lFC5sUg6W8H7LEhJYPLVIGcggr4WTqQsaJrowQNeu2kLfLR6OHDQeZGV/IkJGqFsNSWkb5RB0pNxn0yacs2WliPpQbH80pBkzk5s0MGFAIGxqpCFJUbQamiyXxlQNWeFfhlZx36jL8S3fU8FFVYQgrMmA7dofLX4Rjpz36+EpnZHSAl62kwwIT6ndtUi3wy7zrJ1eSMVDLO/l4qnXPhcvlk6JcdvXrLjmOt8ct7pBs2/aH7HU++/3dAtF5Rl5oaZ7jYfNg90Qquf/WmfSUDbx7f6dkZwzq/ezkVC23BDmMet2le6U/4jqbdmCfz6QxN5iUy4F1kFvOxX/V9ZLwQYbrKZSO8hAN1ZjucAQ721ZXBaDShaynhb+LiUMvlosNjt8N03b0IRnFxYN2v6DZ6ImRqPerqqsmLWWfW3FZXk2f0z9CGEq3bKNh0vHY0ekEGXDIW+eyHei4TsgZgHVTuMPEwIEmGZ96vgEAbD6ejETaTA0IJFwpRA8RsOlMjgmQN1uhjiPDNI8/5wa9thC8qnkwaQdQSHoYV2NOGjkD9zn244ZrZ8JM5iw2YV1lXiofuvRNfrP4RxYdykN6pI1TRaRgxcjBesZeDS4QH4+FDZC0liauq4SXzQqPLhSBfhAhSVpfbAorMy34iqEsz0tAoE6Khvg4qrRaeQgoU48ZADo2Oah2khG+tLC/B999+h9qKXUiJCWDxqi1QcB3oM3QQKDLHZlrNuOXqCfhy0Qoy9wnQISjBW/PuAU1xQ75x6kxGmG12CCK0eO3LL/F7ZS4CLgc4Hi9ySgtx7OhRDB48GB6yhttcHsQP6gtXeiL2d+oKn1IPinZi8JpPcYTQUSrrDTH3d8KLcTB0+jRQYhkcVgvsZWsxrquL8HBOWIx2NhQdxAovjh96D4PHzseKdx/Fkd3rMXBiMgT4Y3G3+2VHgcwHrec90vV/8H70/7H3FQByHFfaX9Mw7SzzakG7K2ZGW2RLFlkySAaZKYmTS+4S5wIX+u+Sy4XBAcfsmGVbYIFtkcXMtLtaZpwdbvqremBHaJliJ+7Pbm1PT3d1dXVNd32v3nvfed9/HpCRYm/v3z/vE8kUnG7n3tl54Fzh4KKUHffNGfWVry8edGnW+k+AMZNnrBsz+eDlFA40RI0M3fiEjAylpaXf9/l8E4PB4F3Jycn1n0SZFDQn4OrVq98cO3bshJycnPOed/Q59dxzz2HDhg1XLOPgwYP4zW9+g//6r/86L3cNPf7w4cNDX3nllS0//vGPB11O5vPThpb4kWU/wcSPH11dgoIadbiuM/+pZa81OY4h6LkowLjQ7r+npbXuqfSsfldUE7ka8E899dSKj5KTQQ4H4QtIWtKj2l2v4ETnzbhtcp+RQvJ14dje7eTloKKTEKyUgdNgrFyL56vScMfCcTB6G/DK8y9i1IIpOLhtF4YuX47xRQrWPPkM/Ncux5DSXHjas1CS5MMbTzyNYO5AZFt47F77HM6OXoDZw3K0l7G3oxmW9AIYGRH1p08RkiEiMy0LMnnY79p9CjfcuhApZhbtNR0QSZt6W87gmYMVWHHdEOzafhhzb12E8aZWvPjnlQgvnYEDr67HkMW3Y6zLC2/FaXIf/Nj75ivYF8zEoCwbzu1ZhcNnR2Pp7OHnGRqkoB/NXgaDsyPShKzJhWkLF9Mpdax/4TUkTVuI8f0daD29CS+8fgZTB6nYeUzF0oVToFRtwYtr92LJWBNWb6rEkHHDkWbn4A0bUTqoP041GzG4NAthst+e00bctXQCeWHtxJaDMrLzMqG2VGL7YRbFaanYQ65paqEDG59fjYJ5d2Bcahg7CPkMqCEc3PAyqpwTceOECfC1V+C151/HnMULyLl4aiHAmf17YZt8AyYMdKP6/VfwSm0ZMtv3IDhgMWaNd6H71Pt46tVKLBrBYdexMO66dVrfzL2qIEgGCWGJUAvFg00vPYP2mx+E49ha7PfnYSy5XzwZsIR8wM4N6+B1l2Fo/2Ryn3rR03kML25pxrLb58KmBLDtzZdxumwyvAc3I3faYkwoc6Dj3AlUtaWgcvf7sNgHIoX1w0u9wcjA5/CG11Ezdg5SG3agJX8qbhg/CF3HmnDoHBmsVL+PlbsULF0yFU6lBW8+8zTy5t+E4NbVqEkajNElZEzs70HYktqX7BPQ1C3eeOF1DJx7C8ZlKth+9oAm3dV6ahcOO5OQn3Yae08LWH7rFBgIEV678zRSnAG8seoEbr5vCZwQsebZv0GclonDq55BjbGE3B8rTm94BkfL5iHp7FbY596PCeNKsOnlJ9A9YiEGDeyH
Ot6MQQU8tjz3OE6T6yx38zi28UUcK5uFEcwxbGswY9zIATBJfvikpKgXgYTKjc9jVUMyhhe50HxwI7afGIQHFozRiLu/oxbb93Uj3x7Ci6v3YcGdN6HcwOLohhewxjUOyXWb4C2eg5ljs9C8/VWsI+cozeDIYErU/ISrdr+Nk82TMTGtGRtPmnA7lf7yn8IBUqbSdhBPvdOAhbfPwwCmG++9/Dwqxs2Dum8HlLG3YjqVDX3jT6QPZKM814GWba9gf9X1mOM+i3dOsxg/phw2PqQ9SxyWiEvEoXV/R0PadFw/Ng8texpwoh3wnHgXfz9Gzr10IkyBNqx65UUU37AEA900eNiHnW+9Dk//6zF7wkB422pQU9uqJVqj6G2twO5jBuTmZUBsPoPtJ60onsyDE8woGjgSSd5WvPHqayieOAQH1m9FRvlg2IUg3nhqJ8bf8gCGpScO+FTSz70I0a7na8PaVw5hxq23oCzZSnsNOiqP4IgyCMunjEdP0yG89so63DB/iua+S2O3W4+9g73nRuOhiRz27KnAvHsXI50Mlg3kueSXy1GcZSfttQNdhekIH9mAfRiJxdPGoPu4D0fOaA9eeL1BzVXY234aLx45h1vI4DnVzEAOeEh9XkPetYsxPtuBI20HQH4COPf+S1hJnhsjSlPhP/QOth0egq/cNFq7lubD7+Ll7c0YOrgAUvcJPL3/JOZfNwjb1m5H3shxyHEa4A+p6D9gAFLOcFof5U6tw8FqJ25ZNA6+45vwpy11GDqkELKHPFufOY7rJ2Zg5yEZmf3SyXvAp92HUNtxPLv3DO5/eDH5bej4zaPTryh/8IHff+XK338WGH3/82px8Uc7lsZEOzlp3ZYD58beOXvgz5bNGPzL/1w+8jMZFH4g+IhRz0qTOQsC3ORjvpkQN46O/VK132Z9bxdO7d+IMyEPdr/wMsY//Ai2uo04MrgEw8jz/Oxj30HPycM4RIimbewEqORYnpNgNBjIQFsgJJr8nsnVKwYGBllCgGxTOU4zKijk2W9iqTQiA0ZQYZQ5WGmiwvZmCEEPDMEeFOWmw99WCUdARK/YDc5kAkPGEwc3babkDY4UDhtIddNLrOju8kCWRLS0BeENBZGcXIZw2Ac3GfvwhHx7rA5wNKGiyCDU40VSmhPVnV1QnTYIvb04K9Zi4He+hb0/+TmYU/s1hY1UqxE//8GPCIm2w2J2wm9PQWpOFn5ddwLHwyEYRQUBQupVngOT5gIYnmZNh8XtQJDURaYPWPLyYtrbYCJtYXA50NXcgsKCEjTyLOTuIKb2H4KxA4ZAVlSkp6dgzNjBGHHvdBzZuwVHK9ajwOzF4JwC9JIHUKDHj1vHTsNrpncxyZ2B/1y+jDIcfOvhh1GaHIbZVIDcgWVYcNNcfHfZ7XimvgKvPPksFCc5b9iLb/7+d5jw+mp4SNt+/6v3YHrqIRxu9WDrvgPYNeEaiIyR9IMwio1dyLN4yLWwEDgWqsxooRn0PZxqEMh4xwejw4aArw6ixEPyS+iWGmEm42SvbIWns5uMZRUtkeaHwa+/Mv1jfj/tQ53vasF8VG1WRJ4n/S9MV/YhkGJh3z9wrDa1KNt59N4bRz/yHzcO/sxms//RoIaAysrK94qKii5Mn/WRYbVaZ5Clhqw+Rwj8PZ9EGMLx48cnLVu2bBZddzgc+NnPfob+5KYnJSXRa8DixYvR0dGBn/70p+glz5oLQY0KX/rSl/DII4+gtbUV3d3d6OnpoeWC8Frts8FgIMOXAQ/dfvvtv/u49f0oSHFae09/dyrNy/CJlLd3796P5TDN87zoKpuhJRHz+31uS+vOHQh2l8Z3YDl/O5fzI3dq9v6PWdXI+QoKCqpnzJjxTnFx8YwPc2DVvg3Y0Z1PCF0BQtX70aueb6jhrUkYNHpixJMB12rbxGYvmL3VaGovhaWzDt2sFUnU/YE8hziNX1DLvRrJEky2hQhpDalGpLnMaLRnY8iQDGSnJsFCswdHz5NZNALCYTJwr0jBoNxi5Ph2YB0ZjZdOLaRBk+R/qq95GitXH8KExXPgQDcONkRDUbiopB7ZjyEvApVxItkqobW9E4VCCF295GXOGJCU4oSh3YWyIYMh5qYhbMwEeWuhLcQhOSmiiMFbCcErcWLbloNwTxkCprsCq989hikL5yI1yYBzVQ0ocwG11c3kRTqUvIA6tIRPtHosnYFQFPiDIjLJD6ykIAv129/EcX4oZgzkoIaD2nfUX4C6UmqzLbwRUm89unu64G1sRlBMMFiydqQ6CKFpbUfAxqGz2wsDeYmnkOs4VFODDm8SumvPQTa7YTQmvMzkAJrqOuG1yaio64azPBUZpKy9NefQnVdEiFsTzO5yCOilo8LzxL0UMiihnhvGITMwJN+AsweOQ1WC5P4loXxIOTKsXqx64x2MmDUTnD0Z/cv6wxmqwEurazH/5uFwiHTWnBBhvhONHhUDnVYIDh71Dc0odoogD0wYBsXcoRSc2bwShwzjMXtwGlpOHiFbWLjIQKSysR2h0hS0tvRq/Yh3ppOB0H40tnVDCtShTbZjlCGMM+Z0DB5YghS+Ga+9vgfTFs9FuKUV2SX9YTfQ7NxGJNkVtHR0o8jBoIP0BesFvwGG3reE54Zgt8PF9aCyvhNFpi40e2TQ9J3uZDea5GQMHV6OnmQX1MxknDzLRrSzGVVLjqgNRMjAUQ74EZAEpKQ5CEFMwcBh/ZCbkQLBnY2ukxUoKC1HbhKDt198E/k3PYjRyfQusHCmuSG0EvI8eBhY0kcHGlIuCo0QrDakCAFSvw6Yk1VUt/qRVGCDK2BDVT3pS/lG1Ld0QiEDnobda7G5sxiLJ/WHv/YomshAzkJ+04y/Fj0BMkgkvxE/Gbyx9jQ4pCOobeyEILSgwStgDLlvzWA1+UMKV0oyTC0ODBo6HKH0ZISS0+A7U4OSQf1RkGnH9pUvwzB8MaYNTNGa05GchJOdbQgEU9HR6dFidy2p5D56TqGeDOySfA3okE0YHVVnoAk701KsqKitQ1eeEc0VVei0k/sYvX5OMELsbdd+K92NLQhLES+xoLcDjQ3kekN16FFNSCKDabvNBFduCYZkkN9Lcj9Csr04fbQVmeUlcGin68amlWtgm7QYZU4WJ47VROJ741DR21SPTm862qqqtVwS3ppdWH9QxvVzhgFtZ1Abs35oLtGR5x7tC2GPDyGpL42cjbRboKIVvmAeusngMyxZEWw+itc3VmP2wulwVXXieIMYd5lmeR4uKpdK+qzoBtq7gpTzwJ6SCmuViKIBQ2Ek5KPI0DeLZSGDaJOxE9llQ0hv60ZWIfkdkb+2vHL0L85Gx8EN2BMqxYpRAhg5CJ8/TAb90ecQqbzZQY83IIscn8Z4kJmvINlIJbtlhIP1WPvWdoyefwPymQ4cqOz95HSc/kHQczJ8eqAx0U5efmfr/nMjbp1R9qs75wz56XeWj/xM42evBnEp3siHCLj4P9p7Md1ixlsP3IWlT76EQ54ObH7yb5g2fiJaRw7GgVTy4/zzr5FGno9ZrR0QmptpmgNU+7xoTk2DqSA
fKnm2qZR0v7MBc97ZijxCrAOEnDc1NWpKS5SwbyFjw8aaduQPG49yaS2WL3aDdffH1vc7cGzPZpgVJwYPmY0zldXweTOQ5LTgy7+4C8++9A72H29GfV2dNiDvDTMozUvHuIFFaJHDaGnuQm8HGfs4LORXHILBbIGJDPpNBhGdPb0QVRG8TL4h9W/paNNCAWgCuey7l8Ntuh+nnn0c5Zk52Eu+Gz9+Anbt2IfS7ExsTyeEvasT7LkWnD19DLzNDMVkhULeeyp5JoqkLUWbkVw7eRaSZ6qFPmfau8AGwuSzgp6OTlgNHDijAIvZil3nTmKg2YaRY4aBJ3UykNFdZUUAOw52wpWUjYDZgf0H9mLc7OswYMgQBPwhqBYjBhQUoaigH84cO45Xnnka72x5HkZPL+qMadhz8CDKhk/GLMdgvG20wFOYBKWzEz1eESvbqmDo6cCuJ/6MLT+5DbUVXQjXHMJSUxClDhM2DH4Uyy3Po4m0X1aWC8MJvXt3zVMYPGIWjA4rUsffjz8+/h8YOagH6dZ00nZ+iHyIXM9w7N9fBZPSS9ZNZMzE41MPgP8XhdvC7Tt2stac4bZW37do7H0pNw1p+qzr9FmBkGtKqpdCGwVcFr1+v38vGV83EPLZW1paeiPLslfyRqOPtzsIub+poqLiP/Py8n5HzvORFR0OHDgQT+JG8y7MnTsX6ennn/4//uM/tJCJpqYmfP/739c8G8aOHat5LgwYcH7Or8zMiIPHmDFjtBCKNWvWQBRFtr6+3v1R6/hhEQwGLR6PZ6zFYrnfZrPNstvt1OX4E4uaHz169LSOjo685OTk2o9blsVi7UTBjDJZknhJlnmj0Rik2z9JoXH+lltueZEu+JA5GfpPXoqYkfGa5V+54Nsk3Hr/vRcdI2QMx/33Do98SB2HB6K2k6X33Rnf5/oV90dW8ubg7mGR1QlL7o5/7ywbiEQY3Xm45d6+7x0jZuLhaDb4u1ZEc1maBuC+R6KdMXM2/j2aVH1F7Htk4eaH7tDWym7rq/eSBx6K1mUx4hM0zj7Tamoi62QElExahLi2TNJw3B271utvQSxB/ajZt8aVAe6IFVo8GfdH13Oixbtm3YqYD8uKu8qjZU7H/dHLd2WPwD13RUudvhTRPXD7/ZEC8m66J161eXc9EF27DrFDkkk7FZ2XNZ+AMyMz1w2bw40Zt94X3XgTYiaupFk3YVh0/cELLMys0Y7rlvW13V1fiaacz4lNuCXjtth9zo9JMYzGAw9FEnfdem/sXrhwy13R9Zw+BZ4xM6JqIndGryX7LsR6wsL7vxRdK4j3ydJrl8Xrfed9sa2jcO8DkdbPnhurlwN3kzYLtxzG5i4L+kW9GRjOiGtv7rue+fd9LbJStCJ6jnQ8EItoSivH/XeXU191jJ08CQYaehGSwJlshJhbUDT7tni7OQZE2iXv7mg/J8+eqUti9yoVD0TtzgNn3R6/Prc9sjF7Ut/k5ZKHHkYfWKQMvQGPxNQdXAk3R5XJANOrDeRYWzoWrOiTe74h2td7kyeC7aEzL+QBKTFITXcjd+hNiPWAa259MH7M3bdH+7RtAr4U/Xnccn/s9+sm/S/SE0tv7TtPyaQlfb8LR/Sqxl8b/03Nvu1+JKJo/ALENFAGzbkt/ju4+97Y2jDcXTgs4QgBRYT0x45JGh+btVmGSGvn456CyJasGbci9lr68qMj4/W+pyiynn9X3z0vd6bBV3uYDP5tKIznnUjCDffFfk+pePDRC39EDOxkgO222eCeuDDeB++PVj1n/oq42sI998RkGTgUTei75jELY3WYhRVRIT7XlL5ny4P3ROudPBePJuS9YwQLJi+6Lf75muWx+0b6auwBkexAn4mBgaNgDO6/J6Yy4YBLG47kYGH0J5oxbUm8H94T65zl5NkcKy9vJHmmj4wf79TeTnm4OyqOs+KRgshK7vX4+iedo0/HPyWcvLJp+8FzAxdNKf7zPXOH/+i7y0d+6rJj/2gIjBG5ThO2PnIH3q+txpdfX4u1O3fCvH0bhhYVImfcKJxMc6MyMxmetCTNc5WjoRKqBHMohKzqJgw9fghtL74GMRiEe8JwzLx+AbL7FYML9mDbO9thsNajZPgg2MJv4nhnKx59nIE/sA0ZxjotPOmOKeNxrrIBnjYZSblWOIwinnh+K86cOI7iLCPmjh+E2u4wWrs88HuDqOtoxPjsVPKeL8bf3hKQnepEu8yii1HIOCsNsq8GZgMPqa4dDHmXKITwU0t2ByH/noZG5PbLg9dhI8+4hdgrSxBHDsIeRoZcvBAnzTaoHS2o27kfjF8k71cFaosPTKYCob2ZPFZd8AoCOhuC6DdkJAzkOnmrHWHyDq0ipCKY7YSRJkeQw+CsRoR7fHjbB4wLdCO3tYWQ+jSIVh5r3nkfFbRcnwGlQ/pj/Nz5WLlyJUoGD8PvDr0LwumxvfYsVviCMNvt+MXba2Ht4bBv7SaUjR2JoQtvhBgKoL67C55gO9hWGaKiwt/RpoW2qTRRpcmKZe+04u6SfPzy+gbUmbvwizd2wMKswfMpQ3D/0Ga8V5+DRYNFHDK14Ju3L8b93/9vQg5GIj//NWzZ+C6O1myF1c5j+rXXYev2ozi9511w6QaMnDKd9IOoPriOq4LTxB47U9GguGzGlnvnj7sn/abBdZ91nT4PyMrKOlNZWfl2UVHRHVfYzdDT0/ODwYMHb41+fqS3t/cBQox/hsurMlKYiouK/i8cCt7i8/lmWa3W7o9Sx/79+5+gf+mExU9+8pOLDAwx0O/J9eB3v/sdjhw5grKyMpA6XrZc6vFFPfPpvk1NTWpKSor3o9TvahAKhUyE9E8n9aNEeLrJZDKS5dM6HcXg5OTkHV1dXfOTkpIOfBIFcjwvcZ+SeghfXV1dUFFRUVxQUIDij+rrqOOfHwYHrrn5zg/e718UhvShmPVxo4lZIwpKY7SyFHfdV3rF3f8hYDjkDp6GB66gGm9PzkFZcmR98k33X37HLyCseUMxO+9q9+bQb9ICXDqbjo5/RnxaORm+aLBx8o49R2oKrxvX79kH5w/9nuG2EcHPuk6fJmK5EowGI6YV9seJ/+gPr6hgW0M93qo6h8M79sMbEuCQgTwD0M9hxJzMbEzNTUOB2w6+yAYU50NdsDAiCq/QhJCMlueEekfNWboIs5cu0MJVWea/aBYUiP4ePEC2vb+3El96aBJ2nK1GSlo2ZBOPmsYGmHOMqKzqwN03TcTMa4ch2HsC7286gP02v5bLSeaMaFYlpPmB+ZMK8d7uBjRXnkQGeT90BCQk0/wOBgFWUp6NCUNWjYR4izCT81qaW9G4bQtYqwmGgeVwl5XCIdgQFmV0tzSivbYRguQDwhIMchhTC4swecxUzJ81AXUBAQ+t34q2I/tRnWVETX0tHP4wrBYz2uvrISe5wKQN00IrvMEwOJMF3eiElGTHj86dw9v7TuN/ls7DgUMH8fCX7wHjD2H3vn3o7O6BTVbRb8Q4fHXDRjR5O8FyKvZ4/Zj3w+/hl8vuwMLxE5GS7MY9K+7FmfpKKF4J75ypwH+teo
20mwlKXZOWJ4O2v6qwkBUJErm1Z7dvxf/uYOCbkIOWQztQPNiGtp5WdDVsx0vcdOQoJ/GiJx0FOWnwyB34v58/hK4ODvPnrkDBoMGYYJ2tuXevfPJptDfsgyMzGY889itk5pVfuWPp0GA1MBXV1U1es4Hr+fqK8Xdl3TLk3Gddp88baA4Ct9v9K7J6JSODMRAI0BmrmJGBkvc/hcPhp3p7e79FyOz3cQmL15H31+LV3/4nvN3to/IGjF1553f/tCTJndzxYes4atSodT/84Q/3mc3mUZMnT/7gAz4EaNjFE088Qb0lzt11112//UQLj4K0kznoaVuUlWyn5f/DvCUIspOSkrbW1dXdmZub+9o/8LwfGvyLL754y0fJyaBDhw4dOnTo0HEpWFh53/5jtVnXjMh97UuLh33LdPtI/2ddp88CsVhcC89iem425uRnE3Iua+EB2vdUNpMsihYuGpPGjCCuOxctIx6oQaNMwWpL5COLcNBPSH0Lho/IAkj5dJBtMhnR3NsMg80IszWMQUMGYOK0ayHK52BmrBibZ4Qn4EGbKKOnh4EaCoILNyMtbwA6ek/BGhBx1uiHNdsFc1k/nGxoQ3dNE8TuaqQNL8J1+WUI2dpRazPhv3/3e6i9QSy97W7UtfRCyoiE7DGSCqPJgB/OX4yd721BRnIylj14F7KyCpAlAIWsjBMFt+GnzBL8dtcuDNiyE52rXsWo8kHod+u9WGMxo9pM2qW+AZ7OLqQkudEs0kS1VvRyAt5L82PMxrdg6urEs7/4A9I5FqIkoaWrA988ehIBuwO86gedplMYBQL57pyiYNGzT8Lg7YFTMMButKCzrgKtfi+MBQUQrUaoohFs2ADWL0XdfBktmz29SarAokPh8H872jDInIc1z1VRR1CkpgcQbHsbO8lxeRkytolh9C/NQ/9cM/zJQZw88iT27w1GyqH1gYh5t9yC65c9RoizlWxXtKTWHPfPlSD3HwETh7rmxtYWRVHlL981fkVBxtBTn3WdPu8g5J220TqyXCZJsIrCwn6jGhsby7KysuLtaTAYQsnJyT/o7e39lcVieZbjuLkieTZse+spdsNzv0QoEFc/ZGqO75r2o1uHtY2as/yPN9z77cfsdsdVq3VYrVbfo48+SkP19546daqEegDQyW5BuFhAgeZk+OMf/4gnn3ySJqLUQieGDRt2UXJc+ttqa2vDuXPnMHjw4GqyXE/zEFxtna6E3l6PO+RpuiaF6f4SehvHGlTZpIU2dfFA3jTSSV1XW1TI5/OdJvU8ROrWQa4hRKtO/kpkEUm7lDkcjmUfUIY1Nzf3ZdIuXyf7/+aTTC75SYIvKys7tXTp0lfIjVj6YQ5Uw93Y+fZqHA+4MXnqJJRlXuBZE+7Brs17kD7xGvSzfrgkNp8fyDi2bSPQbzIG5VwYkS+j7dwpdBjzUZZl+0xqdyECbTU41WHAsLLMT8zZrmX/a/jzu80ozEsFG/JDtJFB0twZSDNdcAY5gF2rX4evdAauLbs6l4BgZwNONCsYPiD3qusrBtrw6gtrMW7+TUiSGlDrd2BIUdQBXA2jYvcGHBGLMXt8CRr3r8UubyluGWHAu3sbMGnWRHw+7tQHwFOD557fjGkP3Ymcz7ouV4BIk7tWtmHgwBIYuCvfwXDtNjzxvorly6Zc0Qfv44CqkRzeTu5/Y4gMJEMw5o7G9dMG4MKuemmEcOTddTjQGISRlWDIGoTrJw+5hOyuB6se/zOquCyk2xl4/TIGTr0O44uSLyqxp/UwXt/QgoVLZiHpQ48ZA9j29xegjF2GqYX/WAlKHX3QczJ8OJgY+ejhk3VJEwZlrv3qkhHfsNwx8uJsXV9Q0PxQpkjyKUIi+YtS+32cURJVSOPtKVgwdRSaqvehf047TMiGNxxAajKhsmIulEArRuVb8Mc//Q0L5s1BTlIujrafRU0Pi31NfphFN1KsrcjIL8Uz63bBZEwnZINFsDeEbtYA07kqKB11GJ1XAjbYiapd+2Ay2fG1u+9Fb0838lNS4ejnwP7NG/Bwyzms3rAPqDqIEBvCDFJm3eGj6GpsQIk7BYc3voex99+vEQJNzcMCfKNHwNqSIkxv7UJx2bcgSiLWrl2L1CEjUT1nGpieLjLGqUNG6VgITgeCvg6EFBkKw8LMmBHIyEGdJKNW8oIJk+12qyaPqZJyQjKjKWVxIhmHk8+0bIUQ+jCjIKCE0BoSIfUjb1uaGyIsgaHqQMEQ2YeDYjZrObM0S0M0ez0VG2U5A7k2Fvs5Ac6pZegXaESq0AB7qgeTkpKQmWHCL1dSZaNWGJJzyWkVsjCw2GwYOGAoZs28HqMmjgbHm0kbGKP5ebj4OXSAJldvaWtprwmEwsIP7ppwR1H2sGOfdZ3+mUBIe6Cqquq3hYWF3M41z40+c/B9VxPhLW2N1VDkiHf88Nl3rL3j6//vkgYbu91OZTHni6JoYnlBzB44aaDZ9uTqUMB3nqQl+R0ze99+7uGD77x8z7XLv/KD2bc++j8Mw1xVCL7ZbO6RJGlgUVHR46S+d5+lim7d3VqOBipLuW7dOnzjG98AuY74MTRHG3020MSP9957r2ZwoOs1NTVUblMLpxgzZsxKv99/FyHgPVc4/QeiprpqQGb49N8Nim8IDdC4ZJAGebagdjNQcC1guPQeZNlRX1//2/T09DcEQQiTelEjy2XP29LS8grZ9+9k9UqjR5bco182NzeXJCcnf42We/VXdnWg4SCr1r17984Dx5adqawZTe6Vgd7bUcMGrx1ckrdm6eIb/nil4/mFCxe+QRd8yJwMZ3aux3tHGjBgUgmSrQo2v/F31AWM4GQ/7MXjMHeoDS219bCNSyhW8WHHmrWo8fOQwz5kDZuOwZZGvLJqLzLzM+BpboKcnAMnK6O3ux320qlYOKk0Im3X24iXXnwPo2+7CYVmBWuffhYZM26Df9ufcTiQgTSjiBavjKwUKyEbYbT7eEyfNx/90yL3J9zbgu0bN6NNNSDQ3YOcsdfhmv5BPPnL1yGU9APr6QB5zcJlMRG+3IaAczAWzxuLtoZaGNJFNBzbhPf2N4M3KAgZc3DtcAc2r96IJnMZQF6Ads8hbN5fB8HAEXrgwsy50+HoPIlVm4+DN7HkZW/F9LnXIcl/Eus3H6GmQviCHMbNnoPytFhHE3Fy6zrsqwlA4EUw7jLMndQfZ3a/g5OtInlBEvKUPQTXTeuPXU/9CSeFAqQJXrQHnZgzZzDOrFuD3R1miOoscGfXYmOtgPKSMpQXkBfhfnIdAmnXoA3XLJiDvEgWO0jBbuzeuI6QdR5KKARX6TjMHluEeAg6wyN3wDjcOG8k6ekqTq5/Flv3n8MYazW2Hm2H0ajCz6Vi3sxR8dsskXu16d3t6CQv1LBfwoDp12OAtRlvrd0FxWBCiFz3qOkj0bxlHbY3MQhKs+Fs2YI3TkgoK+qPIeVWHNxdAc6goqdXwLRF81Hoitco0lLd1dhABjNnA0mQrpuLIksD1m06At5sRK/vBJrKs+L7yqFe1DZ2kedALw5s2ojj3eRXK/WAzybXOrEcl
vPe6Spaz+7C2q2nYEtOgq/xDIxDbsMk5n1slsZi+dRcVOx+C/u85bhpnBvbN29Fk58MTLwBFE65AeOKXNrckqqKOLTxDZzqZEl/9AMZQ3DLtf1RsWsztlf6YDf4obgGY/b0waje8gaOdzBgpACUtEG4dVwSucBObHjtDdiDXeg1FmDudVOg1u8mfbAaBkJ8e8IWzFl8AwzH3sCqniFYMbMYtYfXYltDNpZfNzSS9JHUof7IDmw6RgacnA9q0kDMvqYM7/z+t/BklMMc6kAXTaxqs0AKd6OXycENC6/Fqdd/jWNyPukjMhrJ43nmooVI6TyC1VvPwmgT0OPnMHXOZLRuW493TjajzTMbo8tM2LxxF2SjCf6eMAZfMwdlTi82rtqIHoMTgq8JIZ4mJpBwctsa7K8JQuBUyM5+mD9zNGwxScVgK95dtwVtEgvJH0L/CbOQ0rQebx6TkZdmQEuzFykZbjBkUN3T5sPgGfMwujhFa3OGt2Do5IUYRu6nnxD8V1afRvvYAcixRIoOdNXg3Y074GcE+D0hDLhmLsb0c0XvuwGDpt+AIWTQ2NtwCq9vOgVPcADMtost6pzFidFTF2FCvhnBqp14euteDCiciprN7+Bos0gGsz44i8djVCzMQgnj3MHt2HWmA5zkA5MyGNdPz8CqP7yKYG4+zIEudJPfhNtpgxLsQDf5XS+9cSL5PXpxaPMatOwOoydgwswF09G1dw02nQ2j34AhGJ4pY/fBSvJM4kldTbh23hR07lmD904HyfdDMX3CcPI84xD01OHlv74E5BaD622DmDYCy+YNQ9uRrXjvYBNMZh5BPhUzZ00A13wE63ecJY8nFb2qC9ctnAP27BasO9AMs5GQFQvpizPHoLdiB7Yeayb9SoGXScZ182ch/eqsOTr+RfH6j+eXJNlNbTbzyI81qNPxUcCCFwSUzrkH3iffR3AHedaMDIBNugGTHDnYcXoHTjdlEDIv0jQ9ePudg6js8qC3xYMuTxApnA0iE8KeVifePLwJUsgJu9AJQ1kG+LCIHEXFpNwCGK1mpGXk4kiYRWNXK3hC1g/s34Mxo0bBabWBZ2kCZiu+ZhuIdxamoPfZJjJu8aOKvB+nZmVpRKG5pxMTcibQwSslQZrUHCUIslmGFw605RTCdmQv8jLT0Hm2FvzQEQgcPAI7b0S426cleVSba7TQBUULY1DgJ+NJRlLAygqdDiQESqZ+B2Doe0UwgOUsYExGEKIExUie6WYLeHKs5O+FQtoB7b1Q2+rJiDoImZSjGEiLmjPBWR1QuZgUNDSxe5r0lr5gFUbVvFAMEhk7ec7iRq4VeSV+5OcKGDrUiiPNEqbN5NBa7cN//epPcGcUfJYd5J8Ojywa9tj4gVnrS/OGHfys6/LPjMLCwrXkz9pAMPylQ1tXXxQ2cO7AOwurq+76XUFhyZnLlUHIqxbeVjpg0JHvP7877+C2jXPf+N1/POHpajtvRlESw8b1T/38/2166fHHFn3jDzeOm3zNxqupI/U0cDgc93R1dX0rIyPjOwMGDHiQktmNGzdi+fLlmuHgUqDPjj/96U8g5B0vvPAC+d0NlQkBftXr9T5K/rZcicR/EHze3iSu/eBf8oOtN17VAfSh035kF9KGFYC3WhRF2UCu59ekDoeosYfuQmU6rxbUGFFXVzcpNzf3LUTFlC+JsBcZTjbDFw6bP46RIRwOm1paWvr5/f5yg8Ew1ufzTVJUZsCRk2dcuw8cQ1UtlVyPOEtQo9KxU2em5GakHmtqas7NzMy4bB4U/tChQ8P27Nkzhrqd0IycV4uiQaORdZq8eKaMhrz7dVRZx2DFwiKwgRa88dxanCqcfdExLXvXYxMhWOUFSWAMdi1bsGoCjNnlmDt/Ggzt+/D4mi5MXTELhsYTeHHzWUjjSsAlaLpfBMGOgUOmY+oAC7a/+ioCQ6/HzCILDhAyXNnQjuK0nCjpIwNkloNZsMBqbcOxQ6cwtj95aboyMHbGQuQrZCD+5k5MmTsP6VYPXvnzKtR1j42fxpWah4wU8lIOqkhKT0FybhGGl52ExTEKZSmdeG5tI8YtW45im4r6XauwYfs53DYxA7kZ9Wj2ikhxp8Fi6MX7b+1Bv2tvx+gcI/xV7+PZdw4gb9lkTblAbjiMrVUClq64IR7c01axFUe607BkyUTYeBk7X30Su06maokWh02ajYlZYbz7+htokQnxGT0A1aftGFmegxPVZgwYew3mjcpAy8k9ZH8jIUwmMqg4jtM1E5A3OHKG9mNbcM44AEtvGAIjOrH6b6/hYH4BRmdEmbcqof7UHrwRrIXRYEZ2ybVYWJ6Bhv1VmqyVzcqgueIMarujSeGUEI68vwXHWxjkp9thcArwtrVCTklDUVYqarv8MKWnI8mViZyxQ3H2gIyxQwpwbtNOlIychkWTctF+9oBmhDHbrQi0HMaxyi4Ujjw/Qa7gysfoESUItuViREkyemqrofBmmi0VnuYKnGrowkUZEcigRJQZQqpIH2AlnD15GJ3Dy2CxMwm7iNi78wRGzFyKIXkOVL33HHZdqt+R/nRm/3vYV+VHv1w3DEk8Aq2N8Oe7yH3SSoKkMhBMpE7GMA4dOYq6EgYbTwMP3rUkYdZKBJ1cMZC2tJL9DpD9usZNJhfoxqwbF2qeDAfWPI0DFWT77lMYueh2lKaSNj2xFs+vO4rrr/DM8jVV4q33DiI1vx8Z/DlJlbzo8QTA21MxatocDLZ34tXnNmLgTTegzMLg7Wf/gvrmsVp+jiGDJ2NyWSohuK9j9Y4qLCwhgzaDibSvhfSh0zhd58OU8WNwzHMc40eXwUAGWTJrIN/bIXVX4ERlA0zqcUglk7BsXH/IUU8GuX4PNlfbcOvtC+CCH3vfeAlbzpRi7kAXqXEYx7duxKEm8mLMtMFo59HV3QWXIiB38AgsmNQftVtfxy5lOJZN64f6/auxrbIewwpTInKyVFeeNCxN3BUMM2DVEPlLpSkizw+FDkBZHiarE5znFA6dbEgwMjBU7AFiOEiOEallSpslo0klLwSVitz3zpuo4nrQEEjDnXfMg3R8K9aTihf2SyfndSDk7UYoHOlX3s5z2EB+5/b8IghWFxkId6LNkwrB4sLgaxegzNCJt1auxeBps1CcGtZ+g5UdE8lv1oZhY+diaqEB53auwY6DtSgHj5KJ1+KGEZlor9gFlTfBaiO/3MZzON3ci1SGR/HEOZg/MjuhxqQfJudhyux5yEYVnn7xAJqajdiwpxsL712GVEg49f6b2HygDgsHZaIgvRktvSGkJZFnFs/BQEhFRpIHXlklA+VkqOFGbNnfgumLlyLHyaF530ps2HIaN88u+5fLiq7nZLh65KbZKz7rOnxRoVKFGlXFkKFTwd39Kt769b0YeLgKTPKf4XPehKmjijGkNxPbDp3B4ZOtKBnYDw6mFSfbzsFkSUd9Uy283jCCXDZ5PlqRVpBG3hXA1x/5d9RLPdiwdiu6jh9BdyCIlIFlGOweilNnTqLX60NnRxfOnD6Ds2ShcdWqkYOXJe8/JQhWIKSeN6BWDKDLbEB6
TrY2kXPiyCGsX7saKWlpWHzTUpSWDsDfN76Hs9v2oDJdRUmLhKSWDvSOHIPas6fA19hhHkqen0luQtrrYCbPwaCJ0+QrqbqPSL2hRVlT5FCCIhVtAksGxAyV7aD5LCQv+S6oGSVo+IYSDJH3M3VQIGNyWQJ1NFbDISoirxkrDJJMCEwDJLkGWlwLx0ZCVUhxKpUZZQWYyXjDyLBImTIA/zn+qzi18jUIHT0oHTMTFeFknGk5hrOHV6LMboTbkRQPkfgYqo5fKKy4bqD+3P0EUTp2xqpda576clt91Xlp27vbGnNP7lyztKDwqz+52rKGT565ZvD4vTkbXnniSxuf/NEvL/w+HPDaT299+d7RE6a+R3NDXG25SUlJbeTPo3Tp6enJHjly5IKvfOUrK5577rn+hLDbQqEQR4ku/Q0ZjUbZ4XD4S0tLm371q189GQgEXjGbzZW0HJvto/ssE3LtbG5p/qtValuQqnRePACMgeECYYP9QK8p+w8mR85aq9Xenfg1zctDZTg/DnJzc/d3dHRMsFqtb5pMpkhqdzou9bUAnWeAAE2DoT1XFqum7J6wMPIhGupy0TX5/Vaf12sXBCGppbl5rNFkmkzuS7kohkslSb5I/SIcDsdDVkYPHaAtPR4vfvfUK2gjz3uKQCBor+vq6u9Odl9RGpZft27dnFhOhg9jZEhExtjZKHzzLfz9xYNgyODcPmQiebCyOHvBfumjZmBCw5uo8QbJA5qFi5BIhr3K/E+WJAwqMWP9i68h1cZpWZEzPvioOBQpBG9PN7yEqBsJ2TCwMuSr9t1Q0VJzFg0dPpgFDi11hEyOLEFaZjo279iOwxkzMG1SJrasfB77DZwm2zh7bgE8TUdQ2dRDXuICOpob0OkdionXjMb6za8QYm9EQGQx5do5cWlELnsoIRTrsebZl2AQVPAppZg9fjgG12zEm6++DI5cgyFjNOaUJ2HPoYtraXUng2veg21HU2GPX5uCcLAXPe2dUEQzobTkpS+LiOUvThk0Ffnr1+PVF0+Rl6mIpPLpGJ6RMLVPSEtO2Rgs1DwZYhAR9HrQ3e0Do1ApO/Kyj8n4sUYMmjANzb2b0B6UCPkj7U2Ildxdh1P1reDMNoTaGgnJCiLVlQRL91ZsOpCJjFg0ER0whLzwkI4sk+sNqYLm7ki/Pt/UxCDJnYbQ3v3YeSod/eBDDz2GSucpDKTwxTeXXjftA12ESMrkuccKPCTSNm+98SrKZtyC/pkWbaZjzPiBWLN+JU4lO+FtqINpOJDafyBCL63FC81pCLZVgy0vR/GIqRjWsRG1AYnw2BB4SzLp17HrkOEj5+r2hSEbghAYAaac/phZ2oCnn30ZdkMYTMpAzJxcpu3XRQZ4sonux2tGByjd2PTaShiC3QiYCjG3ZDAkow/vrX8FR8kgrjdswtwFg5EpkTo/vxHPdx6G1FkLMb+PXFoz++H6KQOx8XArXA4yMCLXa7FehcRuqAeHdryHlkMiWvxmzF9aALlmu+bCpiJEWpjTrs9gsSJVbca7O49gXD8GPZ3d8FNvVInO7ijIGTUBlRu24Y3mM+B66yGqZeByxmBK7lq89dyLMHJksGcvxw39YwEUBgyYdC1qutaj1S/CRNoy1WwBF7i6QZkc7MGed99GhZeDSQnBmjMAGba+XqNIAa1fKWGZdmFtQEnfflpvl3uxc93bONtFjuXDMKYWw21iUL3/XfJbHYjpg/qeNpzZgVFTF2BCvgEtxzZj1VvbsWTxOFzT2IXDTX7YSCewGM3kIR0JA7QmFeAaMhDderoTPLm/LJ+meRh8ENSwF0e2rkbznjC53zbMmJePrj0n49+HfV50d3aRwbJNuw7xEn3+cmBTyzFnZBvWPv0CzGYOYS4VM2dmo6N2P6qae8kzi0VbQz06A0PAnqtEfXcAZKyMpoZGSOUTMWVEGjavfgU0INGPJMyaV/ovZ2DQoeOfBZqsNc3lQJ6ZA4ePRv+/7MXGN1/Ac3/6EYrMz2Hjuy5YMiZg2KAslBfyCIU74XAPJI/xMMaOmQSBPNOef3klqroJSQ+k4f7bHsaSG68jzzAZq85U4veOTegwmCDUtaBuzVYMJs/2zJIyhMlzfmDZQIRDAU2Kmbo695J63P3970LMy4eSlUKerQJCBgE/9dYgg048tHYjw+NBkkfF1vpTcA+owH//4rd4Y98+CNMnQ2ZzcM7pQUVrAwSeEAqTCbasVPCkHK6yGQ2t+xAIdZPhSijufsupZBwSljSDgWqQNIODIsnaJAlCkvasp4NzhrxchVAQqmLBgIICDM0rQkZKBjKSUpCZkqIlgUwhxMBFHmw0n4WBjAdMZKEy4zS3Bk8IDp09pZNWPm83fv2zn6DlzEmsPH4M+QY3tvu9CFTWw2b148zRM7ClFmF/SwuOVVSjtHyARjw4PRxCx2eA7Jy8mqKx17/UVv+771743e43nn5k2qL7fmW2WH2XOvZS4HleImPg9dte+pUn6O+9KApWEsWrGGxeHoSgN5A/f/jRj35El49T1FVBURR239FTD7xb7f/1L7Z3CONyzHhsSn+kmVVkqA2wIZJqQjQmb5XSx8wjbdVLxzwfz4zwwSDtUBMIBCaQZ+vM9BRns6N95zrIouvC/WzBhrt6TreOrGbzvpmWlnpjZYt3ybJX6l03DXKg3Sdhf1MQf7ohExaB1YwIHxY+f4C8N84/bt/ugwv3jB02a/L4sasudxw/bty4Xd/85jd/OnXq1G9+mBPyyYW4+/6Y7KAd0xYvv2ifBffedf4GzoHpi2+/YK9C3BPTbksfjQdjapSEINx/W4IGKmfGwGk3kuWCw2/sk6+ctGRFfH3E7PPPY3YXYPGdF8tqLn8gqrWGXCy/MxZmlIKl90frfkss2/4cFF5ogxl0Lb4UU9XDVCwfcMH39jG4s+TCg4bi5tuG4tIQUDZlHsou2Dry2sUYecG2actjsodmXBtXhUjCvQ9HZfEG3x4/Jn/ELNx/odJeFLzJhckLbr5MfcgtGbEAKy46VkDp1MVkuWBzfl8fuP6mCxPapuC2Oy+UOHDizgdjV1sUlyrMHjQN95LlUhDMqbj1nuj1po7CfQ/GwjTS8GDZlAv2LojL/t0f1QK89ua7cG3CHorog33uzXC7oz71ZLiWWjIOK0oiQoNV73k1TwZj6kDc86Xz5VMprll0YX+OlsJaMO2me3HhVaRPuA79J5y/bSrZ78KmvO2RRy8utGwilpPlfJRgxSMlF++rVcKIgpEzcd8FnWfBPbG+k4klD/Tdp+tuj0gebtnnxLBR12ieDHEMugYPDcJFWLjigfj6PQ8Nu+j7ebdcnLtm4LT5uLglo1W2ZOD6i1ROlsblHQunLEXsqZMzch5uTdiLMzkxfu4tGI9Lw54xgPzeL/yRxg62k2NvuuhY28hrSS9KhAPX33Ff/FM6eQbElDVHzlpy0e/0rqiipHvsdSgZe/53N94fa3s3Fi2PSU9aMe/u6HPqzkdxUY+ecwuiIqLIGjoDD170KMmLy6XGYHLk4NY7b4p+6o87ozK3GD4ddw6/YOdBk3DHhfd
57Bzce0HdMXQ68i/3GPsXgp6TQcc/G6h3P2cw4IalKzB38R3oaGvCprfXYsPalXj8hXVoae1ESGGRmZ2FoqIiuBo4zJ4zGY//9c5LxAjzuLa4PwY5krEryQe+zIAAIe+Hzx6EOTsTLCHvO578M64dNRN720V07XkGjZX7IZUWwpSaBDMh+Dwh1ayBhwIDJEaG5E7GKZpgUvVB4V341rpVsLlTYJk3C06rHQK1C5B3V8Bih+ol+0lB+Koq4bU2ostsRkAwwm9KhmB0QDGZIVAViPR0LU+DLEvgjp0j9QpD3Lsb4oF3oARCEEQFTqeLJrRDjjsVBdOuxZfvuxtjs9wXzl5cFtSgcbqmGv/2q1+hN78EJZ5qeE+ewfiHH0H+0VNwCxZ8bcENeHfzVqxb9zYCsgiryYJJc27Et7/1Tdx+x+1YtGiRNpbVJSp1fNqQJEmoqq4qP91xdMaovHE31BzamnZ4/TOXdLvv9Xalrl/16q0Lb77zrx/mHOQ3lfLl/3vVHPPSOQ8MM4XUgf8wngyfFerqassCYeXVZ4/0DHz1uAdLBtgxtcCK77zbiqouEd+enEo+5/sEDg8U9it9/vLuDZ8OzGazv6Sk5E263qmMH2Js3bPaygSHXLifUxCHOFH5tthRS9ZLcMdQJ/68vwtPLsrG80c92Frjx5ziq/Py4Bg1aDQK1R5Pz9mWtq7ax59ffVevz29J3IeGTbz01obvDSov3ZHkcl1SXYSfNm3aZrqQ9Q9lZNCh418BrGBF8hWEZwqvuS1Oar8omLr0ng/eSYcOHTp0fK6hkvG9OyMNS1fciwXL7tAINc8LEcasRtxijcYr+yA5GBWvPPoofvDaW1i5c6+Wk0AmB5sMJvL+BPyWJKyrrdByJqROGIWymWOQYbMi0+FEstOOTLuLLElId1tgs5hhMQgQeB6CYIDKyDjh8eP+3zyN+poj6Go6oZ1TVsJgSXnG4jLYSBnZnm5wnIJkxoCh2f2QZ3cgLBhhsZmRmZUGK58EdPeg+sRJ9CCAts4mdNg47ErNJmW2YOKUifjav/0bhg0ZBAkKaYOI54dEXY8VRvNMoFKV1A2bRjMoakjz0ORkjpAlerU86to78eBvfw9XaQEsFgE+MQU5P/guQiKPQMcuCKQKvZ5eDC4uQke/fth2cB8MZiOGDCoDI8hobmjG7t17SV0unFLQoeOjg87Ah0Ihc3d390hC6m/Jycme89yevxb8cevPmceXvYgfbf53PDD533DzhFux7c0nEfCeuLgMWeLtoY60D3vucz0V3K3PzdY4d0z5JhYOZORNSQf+o+5zbU2TZZlvaW78hd8f+BL5yHxpjBt02dMQwA82t2kJzX85Jx1dEvd2SlrGnclJzrbPus7u5LQ6v3n6JLnjyPOcr+GGS+0jQESS2o5rClPw2kkP1pzpxd3DXXj2cA+oh4bLpHlTKSzDhFRFrHMmJa/q7OjYm5qWtislJbXmwvLopOCIESO/+4s/PvmXfUdOL6bGhdh3lefqR/3qj8888ZX7l9+bnJzcfuGx/ObNm6fRkIlp06ZhzpzLqJzo0KHjs4WvDuveOwJr0QhMGnAJ9RB/M9557wCE/MGYMigXetinDh0fH3pOBh3/zOBiOWXI+8BoTPBejr4fzOarkL1hedgEBf93y1L8/KalYBleyy8gUbrOyprkJqOlima0hVUvlt5MBLVviLKMsBhEOBSG29eL+8aV44c7N6D3yG7qhg3eaIJssWvfB1xpaFHIeUQ/RK4Bmw8fBzktjFYO8x7+OtZtOYXguVot9NFEjR6iB2pQBBcIwZCeAS4lAxva2rDhoQfBiyoK8rOQmZECt9ONLPLXZjND4AWU5xfCarZoJMmd4iZ1sOC4ScUTR0+hNhAA73QhVDoYsDlhlCS0F9tx5I9/wfXjRqFffg6mTJmi5cZISnLAAhl5qck4U1eDjvZmtHt60FvXCs5g1I0MOj4yqHdCW1vbYLfbPZv8sm4xGI39WZY1ms1mxmzuU6FaNvoebD6zHr9890f4xY1/xYpnFmJy0TVY+OD38advL4csSfF9DUazf+ziFb+cfttXP/R7zizZuw2cIRK2RJ4JEX+GyL9hKQRZkT+3sUEN9XXXBgKBlbKiXiQHMSbbjDduzSVPOO4kx3E3lxX3O/pZ1PFysFisvZJh9OLWKvXHaUrjJR0EXOiUM81JeHR8ase+FuXIjXm9m24f1W9HaU7afvLc91/oYZKefuUEBC6Xq+uHj31tSU1tXfFPfvXXVXWNzXGn+/3HTs7949OvPv6df3tgyYXH8bt27Rr305/+9Jv0wfpJGhlUOYTObj+c7iRcpAJ3FQj5uuFXjXCRF8Any5dCOLLuZVSmzsaikVHDXbgbp6q6kVdcAMv5IgY4tflFnDSNwpyhqQiIApIclquoj4SGs2fBZxYj/RLZ6b8ICHg6EBZccJovfMaoaD22BW+eNeHeyXY88fopzL//RsRMqL6WfXhhdSPmLJqBZDr+YVkYTaarlvdqP7EV2yra0HSmHqPvehSjk6mllwyHpEjSh97q7Vh1zIrFC8bAcVGhXmx78XmIY+7CNYWfswjz3npUtZtwy9zLyJNaMjCg2Ix1JxswedDVS4Lq0KFDhw4dlwejSSxqa2xsCwMhIQtLWAmDVxi09vTi3p/+GKdqqgm5bofMCJrng6bIwPDkc4SI0KRimou1rGhJ2SnoN8YhI7S8BUxAJEM1EWwwBLGuCiIhRfR7hWyjY1WW42jKZBw8egyeKg8kTyvkjjYEfT1gwl6wZMSgSjR/QkR9QmUoqVIgkHf+yYYGHKs5p40rqJKEiZAzidRBkmRwPK+dX84h79P770Qduc4cpwXJvBOddeQYpxUepRfy2SqYwz24bdz1+HvnGfzYmIsf/+7XGJaUicW3L0eDYkQv54TJqeCdbQcxsLQYXmMvsnLyoUPHpRAMBk3VNfWl+w4dmXnydOXk3oCYExLDtvlzrvHOnTElheySRw1wmZmZH1gWx3L47vU/w7K/XYe67mrcNPIO/OdbX8Ffl7/sXfYfv1kfgulvxQOGHkhJTWu5WqnJS8FpdQZOfre1k+WES/oDk2v63IVKtLe3ZbY21T/HGSzXXG4f0s4dZrP54aysrJf/kXX7MKA5MdL6T/iWKIrfq6o4MzTg99ktVqsnr6DwpMlk9lF6U0CXUuDCAOSPg/y83Io//+IH5W+uXnfnM6+9/Rt/IOhQFZWtbWgavP6dTUtmz5j+6nn1pKESNO503LhxH8qKdW7fW1h32oE7loxDz/ENWF2RhmXjRfxxZQPm33wdcrl2vLXqAG5Ydi0Ov/giugfMxcz+Vmx85gn4S2fhhlEp2LpyFdxz7kRW1Trslgfi+lGZOPHeavRkTUJR6AAOiaUYlVSNNXtVLFl6DVy+U3j27UpMuWE2XN0n8Nb7TViwfB7SjDx5WXmxb/VqtOWPx6RCO3asewvWYfMhnHgDzalTMKWEw75D9Rg5dYSWjViTPlIlVB7cAzGlDJk2P1585mXMXToL595bhbrUMZg5KAX1Td1Q+wEtJ7ZhZ0cOZg
4xYPXbRzDpujlIlmqwat0JTFyyFEVRicXaQxuwvd6JWZP7o3rLS9ieMh2Lx0aS8XlOb8Fzu0XctHgsIdp74U8ehLTAYbxfY8OsKeWo3b4W59wTMDx8ABubU7Bk9lDUbXsN77Vn4aZ5Y9C+720cM47B3PQGUo4XN988C0r9fqze3Y3ZU4uw8a2tGDN/MUqTw9j42rtwT5mDEqEDb286jMlzb0B+kklTfzi+6W2cNZbgmqHZOLbpLXTnTofzzDp0FV6Piblh7D/ehKGDS7DpzVeRN/1GDM+QsP6VtUidvhyZjWux29cfs8Zk4ui76+AvGAtz9S4E8ki7l9ix791NsIy+AfxB0u55CzDB3YjTPQ4MyLdhx6rVsIy9Pq76wJAHIS/wF4VE9jZVYss7Msw8A1e/kRjtaMbT6yow95a5MLXsxTNvV+O6JQuQ4jmKNw6IuPvOWYjZcFMGTMGisnasemplvDyW5alYBXz1h/DWnhBuvHV63MCgSEG88+IzkMtmYHwB0NrtQxIZzLQdXIW3G7Ixd0oRKt9/G43uUVg4rhh0KNR8bCte3dmBm5bOgHj6HTy1RyL9cxbY6vfxTr0b980pwf6jNSgaVA6mbjteO2jCTTcPw7rfP4GsGTdhbIER77/+BtSR8zEul8W29e8hd+xEnNmwCqnjFmBoegAbX9+I/Pl3Y1TGF9NIpUPH5wF6TgYdOj4YLMNoRgCaHyHVYMe7h09DFlhNzYdmbWbIeEu2CBEDAcMiZDRqx9BJBIVjtO0qR9epMUIBb1TBmTjtO1nlyV+jlnxRZY1gyMtc5TmYbA7kT52Ilua18FHDgNUNVXZAohm9OY7UhoZDCNoMK8JBMEE/RFGBGO4lwyCyUEOELIGKAMmSTM7vg5EcZSnLQ/aSxagV/XAKVjjNWairPk72F8G39kAeUa6pW6SsWoPT4S0oP1aN7zh3YEjhSFS3dODXv/s9cnPzcOs1E5CRnon//p//w7Z1NTgh98JQUowbwqRmAlXFEOPGGx1fbJw8dXrk//7h6VeaWjv6xbbRkKLvfeMhjBp2iWRYHwBVUZRsW9axl+7asPrM2bMHfnzDijdYltUI/4hp8z+xelMlA4blaUbEKwQdf/LY/P6uhaS9XiWXyaUmJ9WVFRftzM9K3jFsyIBt/YuLjwoCL17quNrqqu8EQ+H/4gyXz7xNyvxxbm7uj8i1fWQ5yH8kqGxlafnAvf/o8y6YN+fpmddOfeXXjz/1+Nbdh25PTXGfu9DAQKElfqQLWf9wrjLkReFISYHZYII5Pxvy/ib4JCdsSWnkweoE35mQA4IxICMrCw6ngKzcFIjZ2XC6bUh1COjxBVHf2AXHADeMvBHDZi7RZI/q9xyIHszBlZmKZLsZgfpWKK4C5KfaYbDkwSWfQldY1owMatiHhk4PUoe7IBiMmDL/Zs0q3WWZjkKzE131J0kHa0DJhGHay41l6YutF8f2n0TmzCEodjrJiyOEQMgPKptcPCEXLocFORkunEy47KC3E7BnIjPZCStXiBR+Dzp6wnEjQ3djAwyOfJiMZgwkBH2gIWbtJy/NlBLMn2OC2t2G5vp6eNEPfHcjePtYsr8JA6YvwgCBQ+OOg7Bn5SKZ1CmcmYZ0PgMpyUlgU5IQbo3oxRptKUiyGCClZEIQ6yCSl6TBlYoMUi8+dAqNfqDIaYXVbMWNS/K1hEca5DBaOjpgGjAaBsGEEbOWarMTHa456O8yo62mBrW1jSgs7afJjKaSMow2AekOBp093VAaOmApcJP62jD6uhsBQtLXn1DQPyddi50cMXEsfGRw0KSdTEHDyUOos07GCFIPp51Hr5hg2CTtNOe6dDjP71hw5pZhzvzrkBL15PRUNkNwpSEnmSZ6ykKay4+MtBQ4DW4IaNCEAsy4MqRgO9a+dxwT5iw+z4NBVYLolQQMzs2EK4X0JZeVlCehoa4FJvdQ0r+NGDydEH/+fM8GU3oO2ZcMaLLSkJwkIz3NBcWbDpzzwNdeh4NnGlE+cgRsqU4o4R5QRwre6iS/jVQYlGa0eHmML0kDdXQZO3EcJLMPe8N2jMxNg8MuoiiFxblWH0aR/qdDhw4dOnR8XsGR93ZYlbG/pgKvHN8PtrwEqjcAmVXA0Hc+VX6g9gM62KCGBfJX0TwcaEoEOiZQNXUrMojUypPFkLaYbWT8YSUjs+5ecGEFxpIBsJUUIGtwKUyVrah49hU4OmsQJsdS5QdFIeMgKaTpuauyrKkcydF1qjjBiQokVtJyQYhUwzLu4k2rx8JPxiwFMyejztut5WywhAIIu/JopnwYRA6de/fgmYFD8G8HjmDnmQbYe4Jw9HbD1t6FgwcPwRMWYDCbkUXGu2+ufAvdBjI2LcwDn5WJ1h4bXm6uQ+Xvf4SnH/kPGDhctYemjn9tJLuTGsaNHPzyyrc3x13fWc3758o+qUGvp7Pb693T0da2yZWU/JbT5Wq02Wwe6jkkEA6SayxAbkrBp1ZvzchAidQ/GOUlhbsKcrOOdXZ7sts7u3O27d530zbgpudWro/vQz00TEaDz2I29QwsyTuw+Lpp4wm5TLlM0lXVZDJtSUlJWWy1Wrv+YRfyTw6L2ex/7GsP3fEYcGGm/zg0CcuVK1cuoqESWubbD4GwpxkH9/ogG5Jw510jyAO5GVPGKhrh460pGD+ekCzegrJxE8G5Io/TwqEToRAySp3tCkdMQogQ57ybV6C56hQOHayCNTkHxYXZSC0chhGKCylGO8YQ8kszJjsGzsZtmQ04tW83RIMT1y6/FW5LhPwx5nQsuPMO1Jw6jQP1ASRlFaEoLxVJbheqzlSiM2jCtHmzkWe1Q508B/KpBtS2l2DG4lk4U3EMx31JhCCPh8uRjuuWLUHFyVPYWanCkTYEw1NSkcwKGJ1mQ2aOG7ekNOPU4T0IMTaMWbICKQkhEUPm3IbMhnM4fugQeEcGyvvnRb9hYHWnoqfqLCpaPHAPmIyx5GVp5XKR01iFE2R/zp5O9s9FSskwjEaKdpQzbzDGpFkhkIeNM38QJpB1eDsRDnTi+IFIHebdshhO1gdpjBk2I3lxWwdixZ3ZqDx1DAfDLDL7lSAvNWoxF+y4ZsntaKqqwMEDdbCm5KK0mLR3mgWnTlegVzTimutnIdNKBgxqCA0VJ9BaCaRMWIqxmQ6og+9GS/VpHDlYD3NKDkr6ZWL+zUtwruIsdlQTkp5P2t1phDBkImkXB7LLFoI9cQr7j/SSax6HTELM7Wx/TLJyWkhNT1cX7OlJ5Poi1TPaczG0KEju8XbtM8ubkV+Yj8ljRC3qUzVkYMJECxykyQVnDiaNS8JFUaWkTYZMmARzPFG2Ak9zG0bNWYyCtPPNEZzBhQU3L0LlmePYVS8gacBYJKWYkX3DPciuPYVjZOBgSMpCSWEOog6jcGSVYKItYlRik4pI3VRYSP3V1HxMGRUm9ykdS6ZZcXz/PjjS03DtxEyYGRPpXxPgJscxpjwsu+8m7Nn4Al45rWDx8kUodFlx24pUnDt7EDuDPLIm34phqZfOAttacxoBaxbM3gYEbNngO+vBpBZc5
a9Whw4dVws9J4MOHZeGEpdzVDVjQW1jC7pqmlGWnovqpiY4bC50MD5oPg5UYaKjHf6qRnCdXvJuNsGYlopFE/NQHm5CXX0Hqr09ENOKMXL8FJQI7WBbasBJ3eCDPZDJ2/d0Ry/2VGzGoa29ONrJQFRFyDSBo8UEg5VHeObNyPjaN5CR7sKZ49UQq5sQWrcWTFsNeH8PGW8wkBQGNHLCIFNDBlnxB8DIEhRZIdfhhcmSCogClEBQ83YNB1Uo/jYwKodAuBf33Xs7xkwahT3TxsHj9WiSnRXV5xBWJJQXlqCkqAic2YyH/++/8fqOnWScSsvqhK2XReHIYWhpbsGO1h784u11eGzuJXO26fgCwuFwdJcV5W8uLsidWVFdp+m5UbnA7/73b7QJtWsnj8XQ4syjBp7/rWCx780v6HeUxtRTb54MumRcUjTiUwedRQeNM/4HIz09rfn3P/3usGAwZP7ef/9mHcMy4ZMV5yaLohR3DaLJCQPBkI0ux8/WZs+dSfjFpaOgz7Ese0d+fv77H6UusSSIF/69cP1K2z4IlwppSdwWW7/w7+cB/KFDh4b9+c9/vt/tdn8oI0NK3hBMTrehiJDueIvxmRgYVUeD0YGysoh0anb/8vhx6f36BBrTi2LbWWQUDiJLX/nmjH6IHJ0Ee4IQqdWdjWHu7EtXijUhf8BQnBf1ZklC2bBR5+9nTcXwkTF5PitGJOee/z3MKB40HMXnbXMilh2Ec2Vi2KjLxEWxAlJz+5PlEt8xArKKBpAlcSOPlJz+ZEnYlBm7dlJ9QlxjAoXW1DxN3jLU1Z88dHiU9ktLCDVwon//Pp8A3uxC6fALrjteRwMyiweQJWGbNRmDR/Q1tCKGMHLiFKTkkrqY+hJVMCxP7tPA8+4VbcOi8mFIvKyUgth9NqF8xIUynlaUR52rSstd533DW9IwatLFSW6zY5sEcj/LkqI7J2NA6SVUagmhzy8rT9jAwl1Qfll/Lq2thl7cVqn55WS5eH+LOytef9aW0OedaRgYvQUp+aVkOf+4wvIEcVLegoHjp8FeqKAg1ardR9achP5DLnPP0sfi7ltFcGRgJ+RHA05SouWl0GtVELZMwG3FPC6Ra0uHDh0fATRONhAImE0mU9BsNgc+6/ro0PF5QYiQ8Nb60/h/P/o23nxrA8pybbh3Pof7HJNQbUjCmkNnYBxchAXjp2N0bgkOnT0Bf28Y2197EQvHGjAxV0SLXYYi1SJjcCEeGPYovAYFHcebUTTmVuSXToXCGeANhBAK+tHwwptwHNqBX6zoh5efehLrT3aglxoDpCC4Xj/4Z/8Ed08d7v7Sv+HaJDeOcFb8aUg66vfVoOdcJeQOD0QYcP9DX0ZtQyVSUrPR2NyAvZu34NZ77yLjigLMX7AQN73+Kqq7PZDISzkocQh7/TDZXOhmRbxyaB/uHTUWyQYZTqcTo0aN0pYYqMGFLn/9xrfR863HsJ6cx6h5PKbDK5uQlO5Ce0sH/r57D26fMhkFSZcYv+j4woG+X6ZMmrAuKTm565s//OXORCLa3NqO519bA9PNc3+zZNH8DyUx+WmD5/nPxMgQg8lkDPzsB/8+NRQKGTdt3XFLWAwrb6zf9v2mlrbzWFZHVw9++vtnsHD2FOTnZCIYCqO+qRU5mam/GDp44I9EUeSbmpoyw+GwgS7ks0AXWZa5j2IU+GcANVIJgiDGFhoeQpfYemz7xz0Pv3DhwjeKiooqy8rKPlSCCzshvBel5NTxD4ExiZDcpE/3HKxgRGFJyQfvqOMjw+oi99F1lTszLH2gXmEHFgajHt+pQ8cniV/96ldf1XMy6NDRB0URNTWJfRuewXt//xkmFIkY+4gZQUcG0tJuwJbt+7HhYBPWvvwC0jLTsPfgETz73LM4fbICAi/j328dhlxrI5jcFhjP7EZXKAM52XPhkTk0+DMx+5bbIZuc2H9kJzynX4W5ex92ne7AoWofmjrd8M+bg8KpN2Fu8ioUGJvR396L7qAZc6fk4beVyShmHMjIy0HTiTb876yJOOqQ4S1141BlJza9vwsdR3Zh0dJFeHPzFtx1283oranAvbctRU5uHgRyXbcOHYP9G96FGArBJ0o4snsvRsxZgMC5bniSU7Doid/hmVvvRlGqCaqi0oQUfWoaMfk+8i6+fdkyvP3rH0Bpaofs9aJzkAWZ7jTwQi9USaaZ9z+7m6jjc4kB/Yv33TBj0q/f2rjtqxd+N27MqHc/gypdETRcAlc2MvxDCDr5vYXmzJz+NF2ff/3sZzs6OtKe+vtr39+6+8idYVHUfJo7u3vwt5dWxY/Jz0rZfefSG16rqKgoukyx/9KgBhS60ImUq9mfekfQZJPU8GCxWPwul6ub/v0gQwRfVlZ2ii6fTLV16NChQ4eOfw3QF2lhYWGV2+3u/KzrokPHZw1ZVBESJJzY8jakqjewcEEmWttPgDGacLwyE11cAG9tb8DGje+Cesf+72PfwqQRKuS2jRDC7TBzOSh0F0C1C9i/oxIms4o2TzJmjmewv0bGoJFDYLQ60FJ/DrnmXUguDYPtNUBgzKhr90IKN+LV3z6ER771axxyyKja+gRUl4o0cxCB9gZMdtRjU1U3xg9RUFA0CDXnutFo649tW97HwRO7EQ6HUdHdjff/3//ie9/7HtobWmExWJGcnA7BYNaUKG4fWI76+hr8qqoBISMHJuDD0Y3rMXDEEPSKZiiwYsXaNVCbavHX2+9G/9RkMIY+A3/M0HDuzAmwNS0I8yYYnHZksAw6Gxs0I42ZEWBjP2cKVjo+c/j9AUtDa0epwSAEwmExHtubluKuzc7KrP4Mq3ZZ1DY0qXnZl/bs9vu9aUajsfYf5b6vKAp77NixQdQLYdr4kX8zG/jmwyerFimqytQ3tw+j+3AsK44bWvrX62ZMfuIfUad/FVCPjpiHh9/vt7S3t2sx/SzLKsXFxRV2u/2SuTn4F1988ZYnnnjinltuuQX33HPPP7bWOnTo0KFDx+cUDz744ON0+azroUPH5wEMJ0KAEVXHNwMev5Ys+XijBINihsudhffe34MZs2YhNTUVLS0tSBFO4fSek6hq9KC3RYSznINPliB4wqjr8MDMqRB5aMmwg63dCId5yAqHnoAXKaoPguJDUAlCDffArIRg52R4pBDaWoMIBEQYZCfO1oYR8Kt4alsYXcoppNz5Nl5b34mRIwcjPZnBiXfWo/LUCXT6ApqHgclqw+++933s3rMHb69diy99+csgA+RI6CnLaskYvzNzNsoOHcdXn/wbgjYGQQOL09vehcvlRsqYseisasS1+YXol5qsJbdUJDq+NoENAHRecH9NC/5vzXbw+cOQVJgNjpPQFSDXYxDQ0+bB3bPnI8l+6ZxLOr64sNttvd94+K7bHvj6D04nGhkGlBZt/yzrdTlseG/rrb9/8uXFA8tK8O+P3IU08nuguVpOnK7Eb/76HOoamqsZhlFsVnPXiKEDVl87fcJfRpSX7aKu+p9GfSjhHTx48FEa4tjR0ZE8bdK418aNGramsbmlQJZkIdmd1Eh+g2Ge/iB1fGTQ
cAqbzealky/UuEDb/XL78tXV1QXvvPPOjMS4Mh06dOjQoeOLjubm5oyGhobsjIyM5uzs7IbPuj46dHyWoLLTiihBMLvR1uxH2Mujs90PiwC0BXn4FDsqDh1DKBQCGYRiX1UY/u4wKiu9CEsswkc60THVg7qzRzB+SAb8viDZpxM9nB1p0hFsX/1r5H7150jPzsZbfw+i6cgJ2A3dCDJOZOSlwCnxcPry0NrZCQfhKevafNo51LCsGQlUpQ6m//1f+Oy5eD49DQ//5D9xw40LMHbGJKQaLQiIIXQ3tuG/f/hjNDa14k9P/AnjJ47TklMGZVHLOaUFP6gSZo7oj7NDforntm3HXze9B4XnUWKxYDKhJzOunYXsTDtYOYzfrduMxzduhNfIIyUjE1mZSWisrkXGiAIowRBUjiHtQko1WeBtbsOvl92BG4eORkgJg1UvFPCO+pfTMHAqrUn+A6OCp2mv1YhUqI5/bbicjs7/+c6Xx//0d0++WVPfPIB6AaQ6zEcocZYkiU9cqLt77C+dxacLnXH+oIWe54MSFFJcLrlgbNmyfdd8uvHYqbO488vfvuT1kLLZXq8/ecv2fXfShW4zGg3+wrzsfZPGDntu3JgRa9OTk5s/KcMDrRd146dLbNsIQJMrpO0Um42nuRdoLofYQtsx1n6J7fhJ1OnziMT7SI0EsYWGniQusdwMNFTio5yHX7FixVMzZ87cSAZR+z7pi9ChQ4cOHTr+WfHUU0+t0HMy6NARQUjhYGREFE5ahpeefwGr3m3Ar/4zB3uOhbCv8jB2HupFt6cRX/nKV2g+Ezz83d/hD7/+NcY7K1FTdRqnzvTgv56swdhhmTi3u4qQ7GRcN78ffv+Hveif3oCZy6/FL374dTz86Hdx820/RHPz/ehqr0ZHWxtSUlLR6Q3h/S27MHxABs6c8sNkzcLq5/+A7PQMMCwDgReQ5k4Dx1FaTuhPyAPOboWoSOAJX1AIT3rzsZ9geHs3hgs2vP7t7+GlqDgYL1hhdqXAlJwGKdUJwWSE0WCEyWDDTaoJlW9vQEf1KawnZb3LCvCHgxBYKwIP34lwUSZK3ekoJseGunrQ4glBESR4WQm2rhD6tTXCcfwkikJ+7Fu3EQdJ/QipAlUHoGBZDmarRdsmCEYIZnPEoEA+L/j6o0gqL4eFFa5wZ3T8M4GS3d7eXrvX67VR1/OYASFxn3tunnfXmne3Ptjj8edlZaQeO3nyZPnlyvu08EGGCJvF8pHCCEOhsOXk2XNT6PKX51bSfGO99918/fL0tNTG2D50tpwaCqxWq48udOb844ZdUEOGpsphMgU/TjkxUGNEooEncbnQWHE5Q8+l2jVG/mOfY54CiUaBROMAXafXFfscW/8krvHjgqczNHT5rCuiQ4cOHTp0fJ5A340jR47cr3sx6NBBBv5QoLIGDCrMx1d/+ib+/os5ePb1XjQFJdwwvgAbXnsHvrCC9Stfx4TNm/DIY9+CzIg4cboFpSXTEQ4fhuRrxuyxaejqVjFxwkCsebsCXSEJz28i5Fx8D6Mn9oe/4T28sHIn+pWNgdGejPqGXlRUtcGINkwbW4I3Vm7Acy++jlumzsG4sWPgvISEX0TpIV1bFxQFoqLCt2sP5Jpq2MYMB0qLYWU4KNGhPKv2ZanTJC3lMBAIIeBrJwUABTdMR440XcvbIKoS2aSSvwKqyfWZ69sg1zaj1cDCxnGYwPJwyDIE6ohgNYK19AOT308rm5cu5kmSqsBDyiFEE1/7+teRnhWJcVfBEmJkjK6rYP4xefR0fEo4fvz4wKtNtEdIonzDzGm//7Tr9HEwd+aUx8tL8t/ftH3fbYvnzvjFW+vee2TM8EFv5uVmn6UEO7ZfKBQ2b921/5bigtzdPMeKFdX1YyeOGfHa/kNHZye57LWjRwzbdGHZMaWH7u5uV2yb5tWRmtqWm5tb9w+6xCsiRuo/63p8nqFZzv7nf/7nW4891jdJQ2+q0+nsoes9PT1Omvwq9h3dnnjTa2tr8/Lz82tin2k8zJEjR4bEPr///vuTJk+evC32ee7cuWtWr149L/b5hRdeWLZ8+fLnY58feuihP/7hD394OPb5j3/840MPP/zwH2Kf6WxSom45nWWi9Y99psfSMmKfadn0HLHP9Ny0DrHPtG60jrHPtO70GmKf6bXRa7xU21AkWpsubJsL2y4vL6+2pqYmLmx49OjRwUOGDDkS+zxp0qT3t23bNjn2ec2aNXPnzZu3OvZ52bJlLzz//PPLL9c2F862fVDb0GNpGZdrG3puWofYZ1o3WsfYZ1p3eg2xz/Ta6DXGPtNrp20Q+5xosfu4bfNx+1W0z/937POF/erCtqHtTtv/atvmwn51YdtcqV990r+5C9vuw/7mPmy/+rR/c4n96sP+5j6obfTf3Ef/zV3YNv9MvzmKSz3LV6xYAertBx06vlBQISsiRNZACDr5bUgKZJ4B1atTNbd9FgNygSUTi3H8bAVG9xuAn/16K3qNVvBOHhabE/kFRXj+N79HXVsjenw+tFedgsCZ8Z1b8uAJe1HQbyiam3sh91bBFPBi+rjB8Cmd+Mtf3sKQQbUYPaIQWY52KFIzsvKc5NgsrFtXgy/94Ovo7O2CidRDbmhEV0UFrElO8EYLRFnVRrV8LNyA5lggFaYGBjEcwKlDBxEgRJ6xmcl2BrKJhypHfvYKIfQxEk/JfuxhwESGyVAZRotioDHnLFkUln7DoYisw66CIeeSZFn7nmdpZgcZLDk3Rz0SlIi0JbStl2htVdHamRouVNKGKm/RtrOIGD5ix+r458bAgQOPUyODx+Nx0MXn81n/2V3zCwvyj5FFG/Pcecui719qH7sNPYvnzvxV7POAsv6a5/ysayY/fblyY7P21E2fejHQd7LD4fB8VLd9HZ8NtKcnHeglDvYSQW/slTo/HcRd6Xs6CLzS93QQmTiQvBB0oJo4SL8QdKB6JTdWOlBNJAkXInEQfSkkDsIvhStd2we1HSVWV/qeDuKv9P3HbRtKABJJwIVIJBCXQiKxvRQSyd+F+Lht83H71ZX6PMXHbZuP06/039yVf3Mfp199UNvov7mP3nYf1Daf598cxT/rIE+Hjk8adM6ckuoW2YdHK5uxucePICHKCp35j85PMoQUsxN+D3kKA0EMod98DmNZHi2VR1FdcQbnBCNEaRZUfyOyXlkPf+053D0/AzlDRXTBgdS0TGxauxHBUC8y0hT4u49AkQVkGljUn25Ad2sy5s1+BMMmX4+gpxdSRzcmjByOIanJ8Jw6g84zx6C0tqHu374LduQQpJSWgnUnQVAlKCEJcjCMsN8PNSgSZh+ESg0lfiDgSILRagYrCAhTGcmYkUHpI/LUyBADy7LRNlEhMapmRKBLhPiLUY+JyLFUWYLuT/aCgRo4uMhnhpYd3SdW3nntTb6j3hZU/YJVVPAXfM/ouRj+ZUBd9emSlpbWeuF3sXcQNTzEZvJj+QPo38QcDInL5/XdFXPdT1yokSAmhUgXGhZBF7qemPfhs667jk8GFz7LdOjQoUOHDh06dHzBEFR9MDI
mNIclfLexA89RBYggi16jCRBYjexSrkxn7akMA8vKyA6pKFE5qFU12H36CHoFQrE5Fl1SCCrjh2C1InTnjbA31WNQ+R60e7swcUg5nn1qFSlARiAcUXQw8wxCahBJSQwEewbkUwqOPfZzVKo/BktGqgbBAMbkhCIImleFxLMwmE2oCHjg2bUHhcdOwpliB++wwZySDtadjJDbBa4sA6zLBWOSDemMFZY//RldnnbY0rJoXAQgyZpRgRocYsYCakSIoY/gM2R7H5eL7auqsrZo62RheR68wUCaS4CB42ls+WXbO3YeWhY1PhDyqRsUvsCIkWtKxs1mc4Aun3WddOj4ONCNDDp06NChQ4cOHV9QUJJNSe5f91choPCE7PMoY2X8hGXAmhhwLCXYhEzTUANFQigYQlN7DYqLCtEpCXj8rb9DcVngNRoQlgnhFvtIeig6ux/KyMKjDdfha0MUvLnh7wh6ZfB2P1JcdiRllsNocmHf1v2w2CQ0nzai51Q7Os1WGN1ZZJsNPC+AsblohkbwHEfKVGEMK9oSkAPoZgSk+jhkmc2wp2TDVt4fhvwsGDOSwbldUFkL0iAgd+wo7P7t/8En+WCXBc1woMrUq8EHhovUWSZlUSUNNhrqQKFR/6gBQFEVLW8DvS5ZlSGRha7T0AgaSsKS9qJGEZqIkud5si6QNmQjhgTyH12niSppW0U8G6iXCANBIG0fDpLmE6OtRwM42EjYBPleN0Do0KHjnwm6kUGHDh06dOjQoeNTR2z2m/7LxP4HPqpz8OU4Z19CgciMO/VAuOA0dLOsUulGBVu2bMEf//fnhGRbwCkBMFIQgmokfJ4n21j0SiGIoqjty9odcLpc+MajD2H4oDIcbGrDJpMZ53qC6LJKUAmhjlWL5ijQQggIKffxHPUVx0/PqHAmP4Bff3cucvy1MDqcWPvM/+Lc6d3gXQx8Yhq2Ha+BkR5rCAFdHTD19mr15YV67XqoCUOm6g5hBYKkQpUD4CUFBpr7gFFgfM1ACD9P9hMhcBw4cg0CVZdQJLBOK/IFK4y7TpPqWFHr7UWNx4eAgYPH44EsiZokJm0xajSwM2y8TRl6ZppngXyk2fuoQcFCzmdjFM2wYCR70O8Ysj1MjQ/UcEDOH1Kj5gKyj4fUi5oNqDeIgogHgyZzScpgyL6rDh6A0enWThk2mDH/wQeRPXQguVcKeIaN3sWLQy4+LNQLPsS7EnP+Pkzi/tG+pJs6dOjQcTXQjQw6dOjQoUOHDh2fImRFISRd0ma2oSoaWeMoaaRkWumzClztZDU9Qurz5O8jfjRXgqxo5L5KBP7n7Dmsoh4IFidMHCHehChaJerCzyFMTk/DDlwF45D57z9E5YuPw8DlIL+gFJPmXoOBPIODb67Ejm1b0dbaSQ4hdSckvCfgwX3f/HcUFxTiLz//JTZ88zH8z+Z1ePKdrQg4XOjlVYQo2xY4rZ40mEAJieglhDpArtkXDGPZ+tcwKiUFE7pDWHHf/yEoteHgey9g77ZjGDlrMHIKC5GZlAK3zQir2QCrxap5EMjREIOejg60eXxo6/WjqbkFNp8fKaKMPELUzRYLLGaTJg0ZImenjhVMNNTBJwbRS9rcRYa/HlI5h8sNY0Y66rZvQ79kF8yk7iw9IMKq4+ejeSqCCqeFOEiqhF4mqK23ybROkbwM1LjAkvPQ4wMcNSIw4MnhobAYz/kQEuXI3SP/c1KkbHpdQcWvrb9wpomUVw+e3C+GM+EbdzyC6yaPwaiSEq0M6u1gcdpgMBnBknPYrTZN+pKeOzXJHvGoIIvb4dSun+M5GC2RMAyOfDZEPSe0iBc24h1By9E8JVgmUi82GhojShEjlRbOwWtGFTCRJJc6dOjQ8UHQjQw6dOjQoUOHDh2fCkRIjIBeQrLrahtw59f/E6b0LLgIoTUajDDwVKOgL7lgbIpZlmVNdUDbLinxhINyQvJBEMJKSbBGvqH2bYMXsqsArXc/iCY1GQpD9vEpfXkEWCVyLgKXBbC2NuPkvnXwZrghGVNRY7Hj9O7DGFN5FhlKJ0x2K4QWDsFQkFwL9U6wgTWaUNPUhhm3LsNX7r8L37phCRYMGo1/f+FZVAQDqCPXyxjPH2LS+oUJ9Q7zDMxk+LmvvRdVhJS3vLUKE0w2zF7yb2Bs69FwvAFJ5Hu+pQdKRSdae7sQ8PvRG/DAGw7DE5DRKavoJSTeq7IQCeHOczqQl5SOlMx05BT1g4W0L0wCzGYzGelyCJN2kyQRIll8hDz3hhTN4NEqmLF/3wkUu1wYV5wF3mCJeCzg/ESNtO3oPYkhrhZBtiVuj0NW4qEWiTke5GiIBCXxNJFkbD22D8sxOHLiCE1USwUswTM8Tm9/D11VJ+BIckXKk9lYHklI1B8iuh5W+uoWlMR4nxIR7UcyA1mKxIQotB9FLVNhWdI8JSJl9/UT2q9Uch897Z0w80ZIpG4jRg7Fhqd+rxk2dNULHTp0XAm6kUGHDh06dOjQoeNTAMNEyBhLiPUbq1ahICcLzoxMuK0GLTGgwHJgEpzSIxPHqmZgiCkXUAMCDW1QKUmXI7PL1LigRlUOIrPrSoTsCiz4Hid237oYdRIDNRikfv2a/KLm7a7xQoXwbh4sIejurh6ktp9Bc2cLfByhrB1dUGuOo6tkJPb5ujGeEWB3uGGxt6O32aMZKMAZoBLirpLyKdn85V+fxnOvv4G//vzneO3LX8GrO/bgx6tXoj3FBh8lrzzNfdDXJiqpZ1iTmJQJAWbxRljEQYOKCTKPwsGDUF3Vhm5yDs4gIOwywBtywOv1ossXhjcokYX+9SFASDxdJEKea21WGFwWqGl2GErz0K+oEHaLGQ6HHZzAa/kPRFFGUAyDpxKV1HDDWeFXJLhOV8BA7o9M2gMi9TaJ3TvmPCKdqAzBRu8Z9UZRaMhD9F6piHknJPSBhP4QydkQ9RyIhktQDwKW4TXvFoFjkeFwQUryknqzWmQE9TTIzkqHOzk9fvZYqawS6TcUEif3nVHpq6uMWOPTcJJoxaj3hRzdzjHnJbiM15h6P5Ddd2zZhqNH9sCSVYy2th60d3fDbXdqfZpldK8GHTp0XBq6kUGHDh06dOjQoePTgMoRPieBIUTuXG01oXuE4BPaF5YJ2aXGATpvnkBkY0aD2F+6xMiuEjUsRL5X4uESmqs+LYOQZ6si4NCwIoi9KrLVVnr6CAiR5aPu+wo4WAm9R1cLBrtMOHjiACRCwCVRBEPzLvAqPEfeAwwWHA8JGJ7kRkpaOtpaGhDw+6CyvDYLzhLiTtUTOMLKmxqbseiOu3Dj3Fn41gOPYPawgfjyX/+GI54u1HMhhEzG+DVSd/uY4URmaf1ZVPsDGP3my1hszMT+szVQeKuWkFIk5wv6fPCRv5Lo13ImyKSekqYGEZmp52GA0C2gqbsXR1u7sKXFg7SMUzBY7bDZ7eDMZoQsVqhuN0Lkc8BqRpCQZN7BITWgIIXUppeRIDM0nIAOi6W4HCUftTgkejJQci6Fw/HriXuZKGJcaaJP5pKsi1JcwlK7FSwbVeigFoSI0UGObpdVBl
[figure omitted: only base64 image data survived]

Data Exploration

Load the *dataset*
###Code
import pandas as pd
# Load the UOL news dataset (expected columns include CATEGORIA and TEXTO)
df = pd.read_csv('http://tiagodemelo.info/datasets/dataset-uol.csv')
df.head()
df.shape
df.CATEGORIA.unique()
###Output
_____no_output_____
###Markdown
Check for null values (NaN)
###Code
df.isnull().any()
df.isnull().sum()
index_with_nan = df.index[df.isnull().any(axis=1)]
index_with_nan.shape
df.drop(index_with_nan, axis=0, inplace=True)  # drop the rows that contain NaN
df.shape
###Output
_____no_output_____
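###Markdown
For reference, the steps above (locate the NaN rows, count them, drop them) are equivalent to the idiomatic one-liner below; this cell is a sketch, not part of the original flow:
###Code
# equivalent shortcut: drop every row that contains at least one NaN
df = df.dropna()
df.shape
###Output
_____no_output_____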
###Markdown
Add a column to the *dataset*
###Code
# encode each category name as an integer id
ids_categoria = df['CATEGORIA'].factorize()[0]
df['ID_CATEGORIA'] = ids_categoria
df.head(n=10)
df.ID_CATEGORIA.unique()
column_values = df[["ID_CATEGORIA", "CATEGORIA"]].values.ravel()
unique_values = pd.unique(column_values)
print(unique_values)
category_id_df = df[['CATEGORIA', 'ID_CATEGORIA']].drop_duplicates().sort_values('ID_CATEGORIA')
id_to_category = dict(category_id_df[['ID_CATEGORIA', 'CATEGORIA']].values)
id_to_category
###Output
_____no_output_____
###Markdown
Distribution of news articles across categories
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
df.groupby('CATEGORIA').TEXTO.count().plot.bar()
plt.show()
###Output
_____no_output_____
###Markdown
* A recurring problem is **class imbalance**.
* Conventional algorithms tend to favor the most frequent classes, i.e., they underrepresent the less frequent ones.
* The less frequent classes are often treated as *outliers*.
* *Undersampling* or *oversampling* strategies are applied to deal with this problem [[1]](https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis); see the oversampling sketch after the next code cell.
* Handling these strategies will be discussed later.

Prepare the dataset so that every category has the same number of articles
###Code
TAMANHO_DATASET = 200  # number of articles per class
categorias = list(set(df['ID_CATEGORIA']))
data = []
for cat in categorias:
    total = TAMANHO_DATASET
    for c, t, i in zip(df['CATEGORIA'], df['TEXTO'], df['ID_CATEGORIA']):
        if total > 0 and cat == i:
            total -= 1
            data.append([c, t, i])
df = pd.DataFrame(data, columns=['CATEGORIA','TEXTO','ID_CATEGORIA'])
fig = plt.figure(figsize=(8,6))
df.groupby('CATEGORIA').TEXTO.count().plot.bar(ylim=0)
plt.show()
###Output
_____no_output_____
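###Markdown
The cell above balances the classes by truncation, a simple form of *undersampling*. As a hedged alternative sketch (not in the original notebook), the minority classes could instead be *oversampled* with `sklearn.utils.resample`; applied to the original, imbalanced frame this equalizes the classes without discarding articles:
###Code
from sklearn.utils import resample
import pandas as pd

# hypothetical sketch: upsample every class to the size of the largest one
max_size = df['CATEGORIA'].value_counts().max()
balanced = [
    resample(group, replace=True, n_samples=max_size, random_state=0)
    for _, group in df.groupby('CATEGORIA')
]
df_oversampled = pd.concat(balanced)
df_oversampled['CATEGORIA'].value_counts()
###Output
_____no_output_____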
###Markdown
Text Representation

* Machine learning methods handle numerical representations better than raw text.
* Therefore, the texts need to be converted.
* *Bag of words* is a common way to represent texts.
* We will compute the *Term Frequency* and *Inverse Document Frequency* measure, abbreviated as **TF-IDF**.
* We will use `sklearn.feature_extraction.text.TfidfVectorizer` to compute the `tf-idf`.

*Bag of Words*

A text representation commonly used in problems related to natural language processing and information retrieval.

sentence 1: "Os brasileiros gostam de futebol" ("Brazilians like soccer")
sentence 2: "Os americanos adoram futebol e adoram basquete" ("Americans love soccer and love basketball")
###Code
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
sentenca1 = "Os brasileiros gostam de futebol"
sentenca2 = "Os americanos adoram futebol e adoram basquete"
texto1 = word_tokenize(sentenca1)
texto2 = word_tokenize(sentenca2)
print (texto1)
print (texto2)
from nltk.probability import FreqDist
fdist1 = FreqDist(texto1)
fdist2 = FreqDist(texto2)
print(fdist1.most_common())
print(fdist2.most_common())
texto = texto1 + texto2
fdist = FreqDist(texto)
print(fdist.most_common())
###Output
[('Os', 2), ('futebol', 2), ('adoram', 2), ('brasileiros', 1), ('gostam', 1), ('de', 1), ('americanos', 1), ('e', 1), ('basquete', 1)]
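###Markdown
The frequency counts above can be turned into the fixed-length count vectors shown in the next cell with scikit-learn's `CountVectorizer`. This is a minimal sketch, not part of the original notebook: `token_pattern` is relaxed so one-letter words such as "e" are kept (the default pattern drops them), the column order follows the alphabetically sorted vocabulary rather than the table below, and `get_feature_names_out` assumes scikit-learn >= 1.0.
###Code
from sklearn.feature_extraction.text import CountVectorizer

# relax token_pattern so single-character tokens survive
vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
bow = vectorizer.fit_transform([sentenca1, sentenca2])
print(vectorizer.get_feature_names_out())  # sorted vocabulary (column order)
print(bow.toarray())
###Output
_____no_output_____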
###Markdown
sentence 1: "Os brasileiros gostam de futebol"
sentence 2: "Os americanos adoram futebol e adoram basquete"

[figure: word-count table for the two sentences, omitted]

Sentence 1: [1 1 0 1 1 1 0 0 0]
Sentence 2: [1 1 2 0 0 0 1 1 1]

TF-IDF

TF (term frequency) counts how often a term occurs in a document; IDF (inverse document frequency) downweights terms that appear in many documents (see the formula recap after the next code cell).

Text in SKLearn

Options (parameters) used:

* `min_df` is the minimum number of documents in which a word must appear.
* `encoding` is used so that the classifier can handle special characters.
* `ngram_range` is set to consider unigrams and bigrams.
* `stop_words` is set to reduce the number of undesirable terms.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
tfidf = TfidfVectorizer(min_df=5, encoding='latin-1', ngram_range=(1, 2), stop_words=stopwords.words('portuguese'))
features = tfidf.fit_transform(df.TEXTO.values.astype('U')).toarray()
labels = df.ID_CATEGORIA
features.shape
###Output
_____no_output_____
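###Markdown
For reference, a common formulation of the tf-idf weight (scikit-learn's `TfidfVectorizer` additionally applies smoothing and L2 normalization, so its exact values differ slightly): $$\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \log\frac{N}{\mathrm{df}(t)}$$ where $\mathrm{tf}(t, d)$ is the number of occurrences of term $t$ in document $d$, $N$ is the total number of documents, and $\mathrm{df}(t)$ is the number of documents that contain $t$.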
###Markdown
We can use `sklearn.feature_selection.chi2` to find the terms that are most correlated with each category.
###Code
from sklearn.feature_selection import chi2
import numpy as np
N = 2
for category_id, categoria in sorted(id_to_category.items()):
    # id_to_category maps ID_CATEGORIA -> CATEGORIA, so unpack (id, name) in that order
    features_chi2 = chi2(features, labels == category_id)
    indices = np.argsort(features_chi2[0])
    feature_names = np.array(tfidf.get_feature_names_out())[indices]  # scikit-learn >= 1.0
    unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
    bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
    print("# '{}':".format(categoria))
    print("  . Most correlated unigrams:\n. {}".format('\n. '.join(unigrams[-N:])))
    print("  . Most correlated bigrams:\n. {}".format('\n. '.join(bigrams[-N:])))
###Output
# '0':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '1':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '2':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '3':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '4':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '5':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '6':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '7':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
###Markdown
Building a Classifier Import libraries:
###Code
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
###Output
_____no_output_____
###Markdown
Split the *dataset* into **training** and **test** sets
###Code
X_train, X_test, y_train, y_test = train_test_split(df['TEXTO'], df['CATEGORIA'], test_size=0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Create a model (Naive Bayes)
###Code
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train.values.astype('U'))  # raw term counts
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)  # reweight counts with tf-idf
clf = MultinomialNB().fit(X_train_tfidf, y_train)
###Output
_____no_output_____
###Markdown
Test the classifier:
###Code
sentenca = 'O brasileiro gosta de futebol.'  # "The Brazilian likes soccer."
print(clf.predict(count_vect.transform([sentenca])))
###Output
['esporte']
###Markdown
Model Selection

We will now experiment with different machine learning models and evaluate their accuracy. The following models will be considered (a comparison sketch follows the model list below):

* Logistic Regression (LR)
* Multinomial Naive Bayes (NB)
* Linear Support Vector Machine (SVM)
* Random Forest (RF)

Import libraries:
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
###Output
_____no_output_____
###Markdown
List of models:
###Code
models = [
    RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
    LinearSVC(C=10),  # the remaining parameters were spelled out at their default values
    MultinomialNB(),
    LogisticRegression(random_state=0),
]
###Output
_____no_output_____
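###Markdown
A minimal sketch of how `cross_val_score` can compare the models above; the fold count `CV = 5` is an assumed choice:
###Code
CV = 5  # assumed number of folds
for model in models:
    # 'features' and 'labels' are the tf-idf matrix and category ids built earlier
    accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
    print(model.__class__.__name__, accuracies.mean())
###Output
_____no_output_____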
###Markdown
Cross-Validation

Cross-validation is a resampling method whose goal is to assess the model's ability to **generalize**. Normally, the data is split between training and test sets as follows:

[figure: train/test split illustration, omitted]

In cross-validation:

[figure: cross-validation illustration, omitted]
/7/3mN9vd/NOS/at0UFezBOLnfryhPMrq+Hn/iU35z+PjZ+8dZ66F9R9el3V7pDS2O7jnYPsek+Ti3hHzq8ZTDb3uHcVjx8/+bsd4YssT7RNy+1dvz+8RS1vHO5TfTw59//X2unTMeJQ6tlEIVC3Q7VqZ+/nRbN839rf/7TTstbLhtqt7Nneh18o9u+9p1eda6clqZYUC3a4P95IKgVU19QKNDlgjLCx7A3wxeIzwrixgfeyxx1qPiXd723wEhFF/hH6dYWX6ZLz55pvtNkQdEdalx87TNsXgL5bFdhGERjjbrcRxY13U020O0bI+x7E7H5EvBqxRZwoTIwzuVSKMDZtupV/70j7F8xPnIB7Zv+mmm1qBbWeJdsXyCKajz8M8qh99Sv0fdI5Z4WrnmfD9NAgc+v5r2Ym5M0FlTGD/6K2PZHM/P/cHhDNr5rc0/vH9xN1PtF9QVVx7Yu7My7b25788bPzExq6//Kbti8eOl3J1/lJ99M257NWzv8THupg0/5G8fenFWKmeaGO0Oba9+cGb86DVXJrT8NmqcxuOHT+e3fvEE9lP8oC1s/w6X/eTPHCNcHTn7bdnm668snOT9vf96snyer5z8GB2a/7Crp133JEtW3ouBNr32mvZr0+cf9sJvOoAACAASURBVIXGi7uey/dJ5db8ftUtoC1tlBUE+gjEz/n0s/lIPg/qkjzojD+YpZdlxe7F+0h8H38ciz/exb2hW0n3jvg5/8BfPZCtKnmBUPHYy/NjdwtYU9viOJsf3Jw9X3Lc9jHz9dt+uP28e1C3dlpGYCECxc/riavWVHqtxPVU9bWy/xv7ul6jrpWFnHXbDirQeS+J9zHE7xfTei/54df3Z3sfOvNC0WIfXR+DnnHbdQo0OmBNIxATSjEsi3UppItwr1uJUaXFMDTVF/+NulJ9EdpFqPizn/2sWzXtZSl47LVR1Fl8y31sG+2LuVFTQJtGvqZtO0PWYn9inxQOF/vcqw391kUIWgx/y1yiffGiqGjfICNGo9+pb8Vzl5ZFu+L/45yUBeK92r7Q+WWLx03noFf91hGYhMCezz5zXnjZ2Y6Dzx7Idm/Z3V4cv1jHL8SrL13dGsEUv1SnfxilX6S7/QLcWW+/7yPUjX/4n3vbdbwYa/m840Udzz/0fLYmXghmbtd+pNYPKRCh6E07dmQRpKby3vxljsuWLGl9G+FqlFh/5yOPZC9s3559KB/N2lm61ROhbIyAjRIjUSMsjRIha9T34he+0K4mts0uuKD1/cl8JGva9j15O4pBbHsH/0NgBAKn85dNdYar3Q4TL2Eshq5x71iTB05RjueB7bFfHGv9f/wBbed1O/PAc1tpyNqt/rJlcV97/eyo1XS/im3T8dIxd+e/1G/bv72sGssJLFogPv+ulUUzqqCmAnEv6QxXu3XVvaSbimWzKtDogDVCvbfeeqt97iLAS6MjI6Dr9bh5hKbFcDVeeBSBawplo9IYwRkjOaOk4C9GYZaVFMhG0BfHvuyyy1qbFuuMY6btItR76qmnzpuCINqWHuePbWOf2C6VYp9jJGwatXnjjTfO266snb2Wh0ExXO3mEuujfdG2CHcjkI3wudjPzmP06nOsizpS4Bnu4dyrvs76h/m+OL3EQsPZYY5nHwLDCKSRoTHyNELKeNQ//iobf1GOEgFq/HKQyrr8EdB443Tx0crYJkYLpcdHI2SNR/sXG3imaQiuy9sWUwEUjxnHil+iU/j6fP7X5cUebxg/+zRDoBiuRsj5+D33zAtQIzj9WB6s/jK/30SJka4/evjh80LPv/jud9shbVk9Eax+Ld8uSgS3X83//7P5I/9RimFrbPO1vWdGVbwvD3uL61obKwRGJHDuj17LW1MFdHtxYdwTUrgaIWc8abDpk9fPa1HcO3Zet6MVsEadez6zp5LAM8LVOGZMFVB8pLrzXhX3mMMHD7t3jOhzotoYxX3mD2bxR+lBr5XOz204TuJaaf2hIp8WKoprxad5FALDXB/uJaM4E+ocp4CXXA2pXQxXI1SMUK8z0IuXIBVHUsY23aYSKDYhAskY0RnziUYAHF9pxGkEiSkMjX2i7m7BXoSzUU8qZVMJDNn10t1SmJs2KHOJILf4GH/sl4Lo0srzFeEQQWxnn2P5Cy+8MG/XUfe581z0Cs579ck6AuMQeOg/Ptx65HJ9PiddBKvFX5b3f704gjR/jPM7nz5v3rp4lH9LPo/Xyvefe8VzzKVaRYl/SN2Rt61zrryYt29TPudrKp2PplZxbHUQCIHd+/bNC0W7jU6Nx/Ff2LatNUVAlBh5Gvt1luJj/J0hbWwb9Xxm8+bso/n0AKl0m5Kgs17fExi3QPyx7aGfPtz6ORxzsqav1I59+dMHqURg1Bmuxrq4d6Q5IOP7CHHS/OCL6U+EWdG2zvkq070qwtdUqrpXLaa99q23wEKvlc7PbehM4lp54LlPt/5Q4Vqp9+dz0r3rd30cyJ+iS2Uh95J4SmKxJT7/ve4lca9xfSxWuXn7C1iHOOcRchZHVPYK1yJkTcFrhKvd5g9NTYigsGze0rRNhKfpK4LKslJ8w32aOqBs26qWF0Pn6Esvl1hfDJ8jEO0XPveaWzXqKwavnfPWVtXHqCfOfYx2TiWC5Di+QmAaBWLkaoxaLSsrVl2cxVyp8RVBZ69SHEFaxT9s4h8uveZWjTalEn8Fr+KYvfpnXTMFniwEpRF+ls1tGo/o373pXOjfGYwWpxcIybJ6Yt3GwhyuafqBZurr9bQK3P618//wldoaIWnM753uHfFCqbISf9ArhjhHDp57eVzZPv2W93v5YfyBLhX3jX6a1i9WYFTXylw+Z/1ii2tlsYL2X6xAv+tjbf503TD3kioGXkSg2/muiGJ/Y2BKKu4li/0kNGf/Rk8RMOxpjjfYp9Ir5EzbRCCagtMD+ZuJy/bpF9LF+uKj/r3a3zmaNkLBfvX3qm+QdcXwuFe4WnRJUxlEuBoha1iVlX7tL65PAXhZXYtZHtMqDBqwL+Y49iVQhcDKHuFq1B8B7KBlSeGNzOmxn0H37bZd/KOqV1l+dhqDtE384yZNbdBrP+sIDCrw4/yFVikYjXlOP9rnBY7Xr1+fbX/mmVb1EYzGPKlpbtTYv1hiWoE04rWzPVHPb/7yLzsX+57AVAjEi6Z6/dIZTxzEVDKDliUXLWk/Sl3NCNYzI8nLjh9/OEylintV2XEsJzDKa6UK3eX5KPJepep/1/U6lnXNExjl9eFe0rzP06z0WMA6xJkqvtzohhtu6FvDqIO/4ouq+jZmRBuESTHUHOSlVdGUCJvTtAdvvvnmiFpXXbURCKcgOc5rjKpVCNRRIN4OfSqfnL5YYplCoE4CxVGoMc9pvxKBaQSp6eVT8d8UsMZ/P7RmTfbjsy/EinldY8RrfPUazdrvmNYTmCWBaRjls2TZ780SmbY2VGAarpWG0uv2DAhMw/XhXjIDH5QpbKKAdYEnpTPMjJcrdY4W7ayy+Oh7v8fgO/ct+z7CzBhJW5yuoGzbcSzv7NegAeuow+cq+57mlE11xryv/UbVVnl8dR
EYtcCRVw9n+76+P58n73B7tNGoj6l+ApMUKD7W/8u5uWzdvff2bU4KV2PDzlGquz7+8az4wqyYfiC+3puHtxHgfmjt2uyD+ZfAtS+zDWZEIP7wlu4b6cWFM9J0zSQwVoF0rcS8wFU83jzWxjsYgRELuJeMGFj1YxMQsC6QujNI7AxcF1jdUJvHdAODvBRqqMqH3Kk4enXY0LHTdsimjGS3CFeLc8zGVA2DhsgjaZBKCVQoEG+v3X33E60XkCgEmioQwWkxPB3GofUyrO3bs7/47nez4guvfpWHt/H1nYMHW9Xemr/k6jO33CJoHQbZPlMjEC9J3PvQXn+Qm5ozoiHTKuBamdYzo13TIOD6mIazoA1VCQhYFykZIVu/EazFQ1x22WWLOmJn0BcvdorH7K/O5437/d///Xl1Dxt0LqqBNdw5RgkXw9U4B73miq0hgS7VWCDC1Z3X7chOHDvR6mW8jCReELImnx+18+VY+/O3Ru//xv4aa+hakwXi8f8/XH7ujbGDWCzrmHc19omQ9fF8JOuf5gFqTEHwXD73eoyOLYa3EbTGV7xUK4JWhcCsCcQvxHv+bE+72WuuWpOty198FS+1Wlp4M3lssHNTfo+ZO3OPmbV+ai+BxQq4VhYraP86C7g+6nx2m9k3Aesiz/s4HxOPUaLFoO/RRx/N7rvvvkX2oJrdi6M5F/KCqeKo1YUE1dW0un8tMbdsvNQqlQhXB3mBV/+abUFgOgT2fmVvO1xdvnJ5tu2H20tfblJ8GcJ0tF4rCFQnEPOnRjBaVYmgdWX+x8/04qxf5vfweKnWk/v3t1+s9bW9e7P35HO3bsnnaVUIzIpA/GFuX/4Ht1RufvDmbPPn/aFgVs6fdo5PIK6VGOXtWhmfuSPNjkC8qMq9ZHbOl5YOJnDhYJvZKgnEqNBiEFh84dWolV566aX2IWLk6rSEq9GoztGyg4asRb/OOkbt2a/+6MO1117b3ky42k/M+lkUOHLwcLvZt3/1jtJwdRb7ps0E+gm8N7+npxLzqY6yvC8/1pbrr89+9PDD2ab169uH+lo+nYBCYJYEjr05N+8Pc8LVWTp72jpOgbhWTp8888LQ+CO2a2Wc+o417QJ/c/CIe8m0nyTtW7CAgHXBZNm8uTeLoecQVS1ol2kOIyN0Lgak8QKufiUCzFdeeaW92Q033NBvl7GtT+FqGmG7detWI1fHpu9A4xKIvxynqQHimJ1TAoyrHY5DYFIC8eKpVH5y5Eh28tSpoZsSQem93/xm6+sHhw6V1rMsH7G684472utj6oDiy7ZKd7SCwJQIHP35XLslK1atmJJWaQaB6RMoXiv+jTV950eLJivgXjJZf0cfjYCAdQjXmPM0lZifc5DRmrFdlS9x6nfMON64y5133tk+ZLyIq19/i22McDZG5U5DSeFqMo75VqM/CoG6Cxyf6z2CL958qxCok8CH1q7N3lOYRzUe2e9XYqTrvtdeO2+zCEnjxVbx9Z183tVepXjMXttZR2DaBeIPdb3K3M+Pmn+1F5B1jRGI6QJ6lc5rJY187bWPdQTqItDvXhLXj7m863K2690PAWvJ+X3zzTdL1mRZBInF0ZrxGHmvwDMeLY95PK+44oqe25Ue8OyK4jynMfKzbHqCCC7vv//+ftW11henO+jVh0EqiykLUn0Rrt50002lIWvni6OmZV7T1O5kEeZPPfXUIN23DYGZE1i6bGm2fNW5l/o8X5gnrNiZ0/kv0Lu3PJHNFUYtzVxnNZhAiUDxJVNP7tuX9XpkP8LVm3bsyO585JHztis+9r8vH8Ea862WlZiHNZXWy7Xyr84Sc7OmUnxBVud2vicwboG1+QutUjn282PZwWe7/0HhyKuHs0dufWTczXM8AlMjsKLwb6x+18rOjTvntbtf4DQ1ndQQAkMKLOReEi/kVQjMgoCXXBXOUoRp6ZH1CAAvu+yyVpAaoVtx1GqEiPFyqwhMo6QRj7FNhK8pZIwAdNeuXe06+43o7PeBibojrE31xPFjdGUaORqh8IsvvjjvsftUZ9mxo4+pRN+jvbEsto8RpQt58VRsG2FkBKtRor5oY4SnYRvrwyqmDyiOXo3jRD+moUQQXgyuo10LGQ0c/SwG4dPQJ20g0Evg6tuuzuJFV1GOvHok27rm3nyOsM158Loi+/t83rCj+fxh+/OXmXSOpOg3EqPXMa0jME0C8YKp/fmI1B/nUwREiVGsEaTGy6niRVVRYuqACE135wFsCjvf7phOYNOVV2Yxz2q8zCpKBLG3btgwr56oN0a3xijXVD6ab9OtFKcv+OXcXCvQ/WA+4vZkPqVAvJArphpQCExCYOVlq7KYTzJNMbN7y+7scH7/2HDbmc9yjDKK0DXuKQqBJgus+8j6bHnhZaKulSZ/GvS9U8C9pFPE93UQELAWzmLMAZoeBY+Asfj2+HfeeWfe+Y4QLULW2Ca2jeAw9i17lDyFsot5kVPU8eijj85rV4R/3QLACC0jKEzB6lz+y1m3EqFwjHZN2xVfnPXyyy8v+LH9qC/amEbQhkvRsbMN0c5wnIYSbY2AulgW+iKxCMAFrNNwNrVhUIGbH9ycvfbXr2XHfnGstUv8why/AHQr6z6yLnv9e6+f2S7/BTpGV8QoWIXArAs8/elPZx/7d/+uHbJ+Jx7zL4Sgnf27NQ9fv1KYFietf/qBB1rBappTtV89EZgWR9AWjxPTF8TI1lRXa/qCs1MYvLB9exbrFQKTEIif+w/81QPZzut2tv/49uqzB7P46iwrL12ZnX77dDuMPeZJiE4i39dYIK6Ve3bfk+3cdG50qmulxidc1xYk4F6yIC4bz4iAKQIKJyrCvoU8qh5h4s9+9rO+oy+j3tgu/rvYEiMqI5AsC2ojhI2QL8LRYtDXGRymdqTQdiEjVfv1IULJt956q2d/i+2s8tj92mY9AQLnC2z74fZs4yc2nr/i7JI1+eOgD/3Hh7PNeRhbLHP56FaFQB0EYjToC1/4Qvb4Pfd0fVw/9THmTt1x++3Z/5Fv163EiNcfPfxwdnc+KrbXPKux7jObN2cv5kFprxKBbbfpA3rtYx2BcQisev/q7KGfPtwaydqtLFm2JIs/4G3bvz1be9W5PwYczqcNUAg0SWDNhrXZY0d2LfhaMS1Tkz4lze2re0lzz31de35BPjJz/tDMGvY0PfYfXYtgsiycTF2PkYwvvfRSe1RnbF98gVM3otgnRowW526N/WJUbK8AMfaLryix3UJGP0a/DhRepNF5vIXUHdtGfWmka2dd0b6iY3rkv5tFcVmnZfQxpiAYZP8YVVt8XL9fQF0csTvIee5se+fxOtcP8v0wxx2k3qZuc+qdU9lLf//XTe1+a4TosUKIGf9IH6QcOXjuF9h4/GbQUabx2H+Epukf9fEL8upLV2XF4/aru9/61iT1R8+86GH56hXZipXnzz1Z7GO/+gbxqMs2MZVDmic3wrmykY916e8w/SjOexrBZHrEf6F1RT3xqH+aDiAC0Xj8f6GjRqOenxTmYk31xOP/gz7iH9MKRB1pJGv0K+Z7HXT/hfZ9Fre/+
F/9q3azv33q389iFyptc/wcj7mzowzycza2K/5s/r2LlrZ+9g9a4ud0vA06TSWzIp9iJp54SPeeYt1RZ+e9bJBjL+ReMEh9g/ZtlreLaRrSEykRdsf0P8p8AdfKuX+TLfS6n+XP0ta197ZfmPTY4V1Z/MxSzhdwfdT/+rj4wouzP/7dPzr/5FsytEAjAtahdexIgMBEBZoesE4U38GnTkDAOnWnRIOmREDAOiUnQjOmSkDAOlWnQ2OmSEDAOkUnQ1MmKiBgrZ7fFAHVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCAhYG3KidZMAAQIECBAgQIAAAQIECBAgQIAAgeoFBKzVm6qRAAECBAgQIECAAAECBAgQIECAAIGGCLyrIf3UTQIEZlTg4gsvntGWazaBagXee8l7s/98zd+1Kv0n731v9u6LXRvVCqttVgWuueaadtPdM2b1LGp31QKX/ME/ydK1EfcP10bVwuqbVYF/cfm/yE5dcqrV/Ev+4JJsyYVLZrUr2k1gUQK/f+FFi9rfzucLXPBOXs5fbAkBAgQIECBAgAABAgQIECBAgAABAgQI9BMwRUA/IesJECBAgAABAgQIECBAgAABAgQIECBQIiBgLYGxmAABAgQIECBAgAABAgQIECBAgAABAv0EBKz9hKwnQIAAAQIECBAgQIAAAQIECBAgQIBAiYCAtQTGYgIECBAgQIAAAQIECBAgQIAAAQIECPQTELD2E7KeAAECBAgQIECAAAECBAgQIECAAAECJQIC1hIYiwkQIECAAAECBAgQIECAAAECBAgQINBPQMDaT8h6AgQIECBAgAABAgQIECBAgAABAgQIlAgIWEtgLCZAgAABAgQIECBAgAABAgQIECBAgEA/AQFrPyHrCRAgQIAAAQIECBAgQIAAAQIECBAgUCIgYC2BsZgAAQIECBAgQIAAAQIECBAgQIAAAQL9BASs/YSsJ0CAAAECBAgQIECAAAECBAgQIECAQImAgLUExmICBAgQIECAAAECBAgQIECAAAECBAj0ExCw9hOyngABAgQIECBAgAABAgQIECBAgAABAiUCAtYSGIsJECBAgAABAgQIECBAgAABAgQIECDQT0DA2k/IegIECBAgQIAAAQIECBAgQIAAAQIECJQICFhLYCwmQIAAAQIECBAgQIAAAQIECBAgQIBAPwEBaz8h6wkQIECAAAECBAgQIECAAAECBAgQIFAiIGAtgbGYAAECBAgQIECAAAECBAgQIECAAAEC/QQErP2ErCdAgAABAgQIECBAgAABAgQIECBAgECJgIC1BMZiAgQIECBAgAABAgQIECBAgAABAgQI9BMQsPYTsp4AAQIECBAgQIAAAQIECBAgQIAAAQIlAgLWEhiLCRAgQIAAAQIECBAgQIAAAQIECBAg0E9AwNpPyHoCBAgQIECAAAECBAgQIECAAAECBAiUCAhYS2AsJkCAAAECBAgQIECAAAECBAgQIECAQD8BAWs/IesJECBAgAABAgQIECBAgAABAgQIECBQIiBgLYGxmAABAgQIECBAgAABAgQIECBAgAABAv0EBKz9hKwnQIAAAQIECBAgQIAAAQIECBAgQIBAiYCAtQTGYgIECBAgQIAAAQIECBAgQIAAAQIECPQTELD2E7KeAAECBAgQIECAAAECBAgQIECAAAECJQIC1hIYiwkQIECAAAECBAgQIECAAAECBAgQINBPQMDaT8h6AgQIECBAgAABAgQIECBAgAABAgQIlAgIWEtgLCZAgAABAgQIECBAgAABAgQIECBAgEA/AQFrPyHrCRAgQIAAAQIECBAgQIAAAQIECBAgUCIgYC2BsZgAAQIECBAgQIAAAQIECBAgQIAAAQL9BASs/YSsJ0CAAAECBAgQIECAAAECBAgQIECAQImAgLUExmICBAgQIECAAAECBAgQIECAAAECBAj0ExCw9hOyngABAgQIECBAgAABAgQIECBAgAABAiUCAtYSGIsJECBAgAABAgQIECBAgAABAgQIECDQT0DA
2k/IegIECBAgQIAAAQIECBAgQIAAAQIECJQICFhLYCwmQIAAAQIECBAgQIAAAQIECBAgQIBAPwEBaz8h6wkQIECAAAECBAgQIECAAAECBAgQIFAiIGAtgbGYAAECBAgQIECAAAECBAgQIECAAAEC/QQErP2ErCdAgAABAgQIECBAgAABAgQIECBAgECJgIC1BMZiAgQIECBAgAABAgQIECBAgAABAgQI9BMQsPYTsp4AAQIECBAgQIAAAQIECBAgQIAAAQIlAgLWEhiLCRAgQIAAAQIECBAgQIAAAQIECBAg0E9AwNpPyHoCBAgQIECAAAECBAgQIECAAAECBAiUCAhYS2AsJkCAAAECBAgQIECAAAECBAgQIECAQD8BAWs/IesJECBAgAABAgQIECBAgAABAgQIECBQIiBgLYGxmAABAgQIECBAgAABAgQIECBAgAABAv0EBKz9hKwnQIAAAQIECBAgQIAAAQIECBAgQIBAiYCAtQTGYgIECBAgQIAAAQIECBAgQIAAAQIECPQTELD2E7KeAAECBAgQIECAAAECBAgQIECAAAECJQIC1hIYiwkQIECAAAECBAgQIECAAAECBAgQINBPQMDaT8h6AgQIECBAgAABAgQIECBAgAABAgQIlAgIWEtgLCZAgAABAgQIECBAgAABAgQIECBAgEA/AQFrPyHrCRAgQIAAAQIECBAgQIAAAQIECBAgUCIgYC2BsZgAAQIECBAgQIAAAQIECBAgQIAAAQL9BASs/YSsJ0CAAAECBAgQIECAAAECBAgQIECAQImAgLUExmICBAgQIECAAAECBAgQIECAAAECBAj0ExCw9hOyngABAgQIECBAgAABAgQIECBAgAABAiUCAtYSGIsJECBAgAABAgQIECBAgAABAgQIECDQT0DAkv8TMAAAIABJREFU2k/IegIECBAgQIAAAQIECBAgQIAAAQIECJQICFhLYCwmQIAAAQIECBAgQIAAAQIECBAgQIBAPwEBaz8h6wkQIECAAAECBAgQIECAAAECBAgQIFAiIGAtgbGYAAECBAgQIECAAAECBAgQIECAAAEC/QTe1W8D6wkQIDBJgVPvnJrk4R2bwNQInDp5Kjv99ulWe5ZctCRbumzp1LRNQwhMUuD43PH24f9wxYpJNsWxCUyNwG9PncpOnnbPmJoToiFTI1C8Z6xY5Z4xNSdGQ8Yu8O7s3dnvXPA7Yz9unQ94wTt5qXMH9Y0AgdkViHD1pb//69ntgJYTqFBg71f2Zs8/tLdV480Pbs42f35zhbWrisDsCvzJ0n/dbvxv/vIvZ7cjWk6gQoHnDhzI7n3iiVaN7hkVwqpq5gW2rr03OzF3otWPxw7vyoSsM39KdWBIgYsvvDj749/9oyH3tls3AVMEdFOxjAABAgQIECBAgAABAgQIECBAgAABAgMICFgHQLIJAQIECBAgQIAAAQIECBAgQIAAAQIEugkIWLupWEaAAAECBAgQIECAAAECBAgQIECAAIEBBASsAyDZhAABAgQIECBAgAABAgQIECBAgAABAt0EBKzdVCwjQIAAAQIECBAgQIAAAQIECBAgQIDAAAIC1gGQbEKAAAECBAgQIECAAAECBAgQIECAAIFuAgLWbiqWESBAgAABAgQIECBAgAABAgQIECBAYAABAesASDYhQIAAAQIECBAgQIAAAQIECBAgQIBANwEBazcVywgQIECAAAECBAgQIECAAAECBAgQIDCAgIB1ACSbECBAgAABAgQIECBAgAABAgQIECBAoJuAgLWbimUECBAgQIAAAQIECBAgQIAAAQIECBAYQEDAOgCSTQgQIECAAAECBAgQIECAAAECBAgQINBNQMDaTcUyAgQIECBAgAABAgQIECBAgAABAgQIDCAgYB0AySYECBAgQIAAAQIECBAgQIAAAQIECBDoJiBg7aZiGQECBAgQIECAAAECBAgQIECAAAECBAYQELAOgGQTAgQIECBAgAABAgQIECBAgAABAgQIdBMQsHZTsYwAAQIECBAgQIAAAQIECBAgQIAAAQIDCAhYB0CyCQECBAgQIECAAAECBAgQIECAAAECBLoJCFi7qVhGgAABAgQIECBAgAABAgQIECBAgACBAQQErAMg2YQAAQIECBAgQIAAAQIECBAgQIAAAQLdBASs3VQsI0CAAAECBAgQIECAAAECBAgQIECAwAACAtYBkGxCgAABAgQIECBAgAABAgQIECBAgACBbgIC1m4qlhEgQIAAAQIECBAgQIAAAQIECBAgQGAAAQHrAEg2IUCAAAECBAgQIECAAAECBAgQIECAQDcBAWs3FcsIECBAgAABAgQIECBAgAABAgQIECAwgICAdQAkmxAgQIAAAQIECBAgQIAAAQIECBAgQKCbgIC1m4plBAgQIECAAAECBAgQIECAAAECBAgQGEBAwDoAkk0IECBAgAABAgQIECBAgAABAgQIECDQTUDA2k3FMgIECBAgQIAAAQIECBAgQIAAAQIECAwgIGAdAMkmBAgQIECAAAECBAgQIECAAAECBAgQ6CYgYO2mYhkBAgQIECBAgAABAgQIECBAgAABAgQGEBCwDoBkEwIECBAgQIAAAQIECBAgQIAAAQIECHQTELB2U7GMAAECBAgQIECAAAECBAgQIECAAAECAwgIWAdAsgkBAgQIECBAgAABAgQIECBAgAABAgS6CQhYu6lYRoAAAQIECBAgQIAAAQIECBAgQIAAgQEEBKwDINmEAAECBAgQIECAAAECBAgQIECAAAEC3QQErN1ULCNA4P9n796D7aru/MBvHHcmLaUseiI5qalpXfFfS+ZhIjHTg8PFJJmxRNnFQ1RMYkDwD8KDzcsN3QbJQCND2g4P0XYskaniIdyBLsSjcVnqPxLMVdnxpKVgbFrKX0FXpPJHS1UNTEld0z0VZv+OWEfrnnte95x97z3n7s9ynTL3nL3XXuuz99E+53vWXpsAAQIECBAgQIAAAQIECBAgQIBAHwIC1j6QLEKAAAECBAgQIECAAAECBAgQIECAAIF2AgLWdiqeI0CAAAECBAgQIECAAAECBAgQIECAQB8CAtY+kCxCgAABAgQIECBAgAABAgQIECBAgACBdgKfbPfkUnvu1VdfLd5///1Gtz772c82HsrcBX7yk58Uzz77bHH22WcX999/f+P/l3qJPr/99tvFL37xi2ZX16xZU1x66aXF5z//+aXeff1bBIGp56cq2+qqiZXF2kvWVVbfIBWd+uBksffbe4uTH5wqNt+3uVi1etUg1VhnCQscP3a8ODJ1pLIerv/S+mL5iuWV1TdIRdO/PFrs+/7+sh3LiqvL436x2zNIH6wzHgI/Pniw+PDkyUoa+5urVhWfW7e454xKOqKSWgo4l9Ryt+v0EAKHfnSoOPl+NeePUfnOse97+4v4t+CiL24o1n9pwxA6ViUwmEAtAtY777yzOHr0aEPogQceELAOdqwUN910U9MxhawDVjXyq0WgGsdNBKydSgStr7zyiuOpE5DnBxLYvXXXQOu1W2nyK5OLHrA+d8+
e4sDHofGh1w8W/+a//V/tmuq5GgscmTpc7N66uzKB+1ZvK9ZNLm5I9NiXHytOHDvR7NP137mhsv6piEAusL384fu9E2eOtWF0rp2cFLAOA2jdRRU4MX28PJdU9xnKuWRRd6eNL4DAc/c8W5yYrub8sfaStcW2/Yv72evg64eKlx/e25CL7xxPTO70A/cCHEc2MVOgFgFrvzs9H+kaoxMjQFvqJQLEFD5HfzuNyowRwGm5MMlHdC41o2eeeaYRJvcq4XHhhRcWTz/9dHHjjTf2WtzrBGopcCr7ZfxUOYr1ZDmi1Wi+Wh4Ktel0HON5uDr9y+na9F1HCcynwDvl5653pk+/n4y2nU9pdY+CgHPJKOwFbRgngfjBPpX4znHq/VO+c4zTDlwibRWwZjty586dzRGLdQnN4pL/CBSjREjYKWCNEat33HFH8cQTTzSnCFgi74EZ3YjQNA9XYzqJxx9/vDFKNQwiaI5QOh8VHf9dl0B+Ke7zUevT1t23dG3SnnueK+JDQ5T1X1xfbPjSRR2XX7V6ZcfXFuqFzds2FxEwReB09b1X+6CzUPBjtJ215WjTbsd9XG6/v7zcPpXrvnN91+No1ZrFnYYifkDYfO/mYm85imJZOUWA0atjdDCOYVN3bNlSfNBlioDv7t1bvHf8eKNnmzZsaDw6lQgtR7nsK6dDiP5EuXjt2uJz3/rWKDdX2xZYYPX5E84lC2xuc+MtcP0f3ND8TtGuJ3u//VLzB+MYoTp53aXtFms8F593Frts/PrG4vCBw402f+HWjcWqidE+py22l+3Pj4CAdX5cl2StETTGYymXPFyNEb1vvPHGjLlmI2S98sorG4FqjF6NQDZC1wjnl7rNUt7vo9S3yesmuzZn78MvNT8MTZy/pvyw0335rpUtwIsT560pdh55cgG2ZBPjKhDz8q66rvOH4CMHVs4IWDeU82qN+ofmmHc1HgqB+RboFpjGtl94881mwHruxERxbTmHvEJgKQrEj1vdPhM5lyzFva5Pwwhs6DFH6dTzbzYD1vis1u39NUw7qlrXd46qJNUzjMAnhlnZugSWkkCEpfmcqzG/aqcbeaURvan/Mb2EQoAAAQIECBAgQIAAAQIECBAgUD8BI1gr3OcxkjEeUSKA6xTO9bvJfG7UbutUvd1u2xrktflqXz4nbBXz5eb1xb6LaQG6lUuzUSBpJOuw+7zb9rxGYD4Ejpc3hYjSa0TgqXJeyZPlXEZRlp+9rLwUaGHu0p5vt1cb58NHnUtPYC7H1GId9+l9uZDvtaW3p/WoaoGYiuDDU6fPA8NMJ5DX86lly4oVyxfmfFK1h/rqLeBcUu/9r/dzF5jLZ5u0bGxloT7/z+U9PffeW6MuArUOWGO0Yn5JeApHY+fHvJoPPvhg4zhIl4p3Oiiinlg2bvyU1xHr3X777Y25TTsFb3FpecxrGiUuPY/LzOPvqC/VFevHnLB5iddi/tQYOZmPukztjUvY77///rY36jrnnHOaVeXtba0rLo/Pg8vLLruscUl8lGhntLdbibldo41zbV+qM98/aR9Ee5NZ3vZ06X6nPndrZ3ottpHmo12xYkXPVVr3abSn9bmelViAwAIIxPyV+76/r7GlDeW8rTEn5P7v7WvMEZnmc538ymSx9amZ87/GB42p5w8UB1//s+LIgSMzWhpzLa0vLy3aXF4GHZcNdSr5ttf9o7WzthF3/X1o047G6svLOh/+D4+UbTpZ7Pve/mLqh1Pl3U1PB8Dxer/b7NQWz9dL4PEvP1Yc/dXpG+Jc/wfXFxMXTBR7v723cWfZdNzft29bsa6cAzYvx48db7w/Dv7o0IzjL5aZKOf421jO69VtHrJYbsfGh4rj5RxgUWLbrZfhtXtPpu1OPT81Y060leVcynFZ3ub7rsmb6b8JLIhAhKFP7dtXvDA11ZxqIG04pie4edOm4nPret85uls9nymnLtha1hP1tYatV/7+7xfvnTj9Xsrnmv3ZkSPF+ttuaxo8e9ddxbnl5ziFQNUCziVVi6pvKQvEZ6znfndPo4sT560u7nrxG43PVPv+9f7mZ6qYz3Xb/u2zGOLzT0xLMOh3jnzbcR+Kdtu4fd3tze0+/B8ebszpf/oz38FZ211fTkl1/Xev7/o9Z1YnPFF7gVoHrLH381GL+dEQYVke4HU6UiKITQFp6zJRd7wegWBrWJmWjW2kNkRA262+tE4sd9VVV3Vse9QXQWE82t2sa9A+x3pp3W42sUwE163Bamp/al8Eug888EAjhO5U8rbGf+chb+v+iv7GNjtZd9pGej4C1jxQ7rV8q+Nc1u1Vt9cJVClw8v3yruYfB5Vxw6m4UVZ+06B224obCz1WBlT53dDz5SKgOlB+EIrH1t1bOwZO+baPd7jpVmpbfIWOkGnHFx5qu920zbhL6F1/fFf5wW1Nu6Z7jkBDoHEH5uZxf7Rx3Hc6nhNZ6w8PrZTx/tm9dXcjqN32p9s7fuiO4/jE9OlQKNrRWvL3RdzlNr5Q7LlnT9ubTUSbX3745fKHjoPFIz//l61V+ZvAvAm8U37u2vLYY7OC1bTBuOlUPO7evLm4+5rOPwBEPVc+9FBz9Gtrg/98erq4bdeuIkbFvrJ9e7E6u9lW3KArBayt66Wbd8XzH3w8srZ1GX8TGFbAuWRYQevXSSB/vxTFR8XL5eelGNDRrcRnpvghIz5jtSv5d464YW6nH5xbt92urnzgRvz34/c8NitYTesd+tHBIh7tfoxvV7fnCIRArQPWCMRidGgqEfil4DC/K3ynUYkRcuZzb8Y68YgSIWh6LQWDb731VtcRjhEOdgolUxtTXamdaeRmupw9tht1pPAvgs7oZ2pX1JP3OV+2dblO/U5taff/7ULQ2Ha0L+qL19M2ow933HFHo5puIWu8HsumcDX1OdUXfU79jf+PPkfIOt8lRuemkpvO93bVT2AYgfhVuPWX4db6GiHnxh3NsCdGjsaNheIOvVGmy1GBEXKmsCoCp5XlKNbWkYCt9fbzdwpXY5sxWjAeEUClu4JGHbHdPXfvafvLdD/bsEz9BOIHhTRqtVPvW78ExMjRGHkaN5OLS9XiR4dD5ajWdAzGsfrwzx9pjH4YpsSxHaO1o6T32rJyGo7YZrxXU7uP/epYGey+1PGLxTBtsC6BVoHWUDQu5b92crIxwjRGku47dKgZvH537+kvz+1C1mNlQJqHq1HP5eVI1TTFwE8PHy5iNGqUCEyvKoPYPGSNUa1pWoJ3jh0rol1RYv3PrV3bbPaKsl6FwHwLOJfMt7D6l5JAfH7vJ1xtHVixvrzabu0lp6+MaP3OET84ryk/l8VVdMOWfCBJjKpN3zmOlp/34jNXKru37qrk896w7bX+eAjUPmDNL71P4V/sui1btswIIlt3Z4y8TAFqBH1xQ6Q8xIzl87Ax/jtGp7Ze6t9ab/wdYWMsO1FeMtVaItRN4WoEou1Ga+bbjfVjuoG8bXkbIoxMl8XHMv20r7VN+d+xrRR2dnOJ/iW/CFljPtNuc55Gn+MRQWZMT9Aa/uYjf1OAO58jSmPUcnKL/sfUBA
qBcRL4QnmZ86ZbN7Wd12j3zbuaoU6ETO1G6rWONH25/HW6ioA1wtP4kHPni3fNCq7yACyCp8NlyFvFNsdpv2nrYAIRUsaxvLE85ievn5x1bMXxnH8J6DRCIkZs7yo/aEeJYzX+jjqHKemHinbbjHY99s8ebX7Qjy/3nUZuDNMG6xLIBSIUjZGrKdiMkPPJW26Zcfn+jvJzcgSr333ppcaq8d9fLj/L5aNP4/nbfvCDGfO2/vtHHplRz93lMvv+7M+Kr5cjWGN7EbI+9eMfF1F/lPT/jW2U22oGrCtXFk9+9auNZRQCCyUQ55L4Iezqezc7lywUuu2MrUD6gfiScpqjS8upyNa2TMkUHYsrgtLnoG7fOfLPQs+VVyRVEbDGdleff3oag9bpzvLvHLFcXEV06XWXju2+0PCFE/jEwm1q6WwpAsQ0P2v0KgK/1nA1no+ALw8sI5Drdml9rBPBbYR37cLVGKkZj1TahavtthuBY6/tNisd4j8iMM1Dx3ahc96+PFCNgLRXiTlfw7M1XI31IuDMn+81ErjXttLr4RZ1pUdM9xAjafP2RpvmM8ztt62WI9CvQAQ5N5TzsLabND5G6eUjXNuFq7Gd+CByy+4zc7bGOu0uhe63TWm5+KATcya1GxV4dTnfa3yxSeXYx/NrznUblq+fQPrQvulrm9oeW1N7To8gDZlYtlOIGV8SYmRFKhHyV1HahatRb7zP4r2aSnxZyW/8UMW21UGgVSDmXE2X38dI0dZwNS0fUwPko0gjGG0taXRqPN+pnk0XXVTcfPnlzVVjvleFwCgKxPkhrlyo4lwS55NUnEtGcW9rUxUC8fkmvi+0C1fje0P8UJ3K1nK51qAzXmv9LBSBZxWfhRqfDcvvHO22Gd85YsBHKv95aua9KKqwUcfSFKj1CNZBd+lrr73WXDVCwm6Xh0fwGo8U+EUI2Wn5COm6jYSM1/MAs1uo1zoaNELh1ucG7X+n9VovmW8XOqd1IwyNYDrCyijhE23s1qdYvlOJ+mJ7aVTsdDmfVxUl6stvhJbXGduL/dWtn1W0QR0EqhToFh7FdlZOrCrnVD0TnLb70JHas7q8aVBejh89USy/YLjLpbfu6j4iKaYqSJdTT79dzfu8Sl91jaZA3CSq27G8rrwUbdXu0zdrixsjdCsxbUCaKiAuXRu29HpPxpeS+GEhjQSJLxXtfhwZth3WJ5AEYl7VVCJEbb3xVC71O+Xcqz8tL+uPEsFoPuI0vylVvN46ujWv5/L164uf/fmf50/5bwIjJ1DluWRdGd6kcMm5ZOR2tQZVIBCfXTr9YJ2qz79zxM1IO5XWz0IxX+uwn4Vi2+0GdKQ2xDQFadBJXFGkEOhHQMDaj1LLMvm8q53C0nyVCDZTwPrmm292DVi7NSdCxJi6oJ/SOspzoUawprZdccUVPZsZwWQEqmlKgQiuu83F2i18jY3lfU519mzEEAvENhZiO0M00aoEZgn0+jASHzTiC0Q/pfVDyak2N/Ppp558meXl3JPdSgTACoG5CvQ6btZOri3if4tRer0no00xJ2szYPUhfzF2U222GXOiptGrjXlXy8v+u5Vzs+ms0iX+aX7VCGajjjTVwO5yZOyOG86MyM7rPbf8PPjqt77VbVNeI7DoAs4li74LNGCMBOKzS7cyl+8cUU/+WaiKq+a6tS1eWzXR/Qf3Xut7vZ4CAtYB9nsKS2PVFStW9AzZ8sv9qw460yXsVdc7V5Z86oJYt99RnbFcGpXbWsdc2zAfy+fti/ojUI2APd1YK0a3Rmg+7Ny189F2dRKoSiCC08YUAOVk9QqBugjESNGYhqL1uD9cvhcUAktV4J3sCqAYcZrC1m79/c1yPtT3TpxoLBLzt6aANf6OS///1cfztMbUAxHgbt20qbh43bquI1q7bc9rBMZJwLlknPaWti62QHznOD59oryx6OwrhOKmWQqBURcQsM5xD7WOWOx0+XinaqsKQiOUjEvy87C30zYX4vm8XzGStHUEbac25KNSW207rbOQz0f7WkfOxrQA4R/zsEa/479jmW7TOyxkm22LQFUCU89PFVPPvzljTtaq6lYPgVEUiA/2+763v9j//X3NEaOj2E5tIjBfAv/1L/6iWXWEretvu21Om2oNZO8ppxj48OTJIsLVKH9e1nlbeUOrKJ8pR7/GCNiYg/Xy8kZaCoGlIuBcslT2pH4slMCRA4cbN7zK7wOxUNu2HQJVCghYq9RcgLoi0LvqqqtmBavtQs2FDCzzbfUbri4A17xtIqaGiH7GvogSNyaL6Q3q0Pd5Q1XxyAjEF4PHvvzYrA85MZfSsnIKgbyccLnyyOw3DRlOIG7wFsd9upttqq31uI/3R7pcf7gtWpvA6Al8cKr6EUIxLUAEqTFFQASsqcR/x+PFcu7WGPV67eRkcXc5p6tCYJwFnEvGee9p+2II7LnnufKH7f2zNr2yvNFnXnznmEXkiREUELAOuVPuuOOOOYVq+XQBg2w6Rk2mUasR5sX2I+xrV+9ZZ501yCYGWqeOweKVV17Z2PcResej2w3MBkK1EoFFEnjunj3NcDXCpY23bmrMy9pursivLP8Xi9RKmyVQnUDcvCAPV1efv7rYdOvlxfovrZ91A4QY1b176+7qNq4mAiMqEPOibipvPjWXEqNS25WYyzUe75RTLcU0AT8rHz89cqQ5P2uMfP3u3r2NG2X9+0ce6XpjrXb1e47AKAg4l4zCXtCGcRJ4uRy1moerV9+7uYgbwMVNrVrL7etuK06U0wcoBEZZQMA6x73TGiTGqMXWS8jnWGXfi0ewmuYrjZXeeuutBdt2r0bmBjGaNQLHVqt2dbROLdBumVF+Lr+B2QcffDDKTdU2An0JxCU66a62scLDP3+k693X+6rUQgRGXCAuS0sjV1euXlls2799VrA64l3QPAKVCOTzp/7m3/t7lY8ojdA2HlvLuVmjRNj6YjmXfQSrUSJo3fLoo254VcneVMlCCziXLLS47Y2zQPwgsffhvc0ubNu3rW2wOs591Pb6CXyifl0erscRGubB4ULOgRo3U0olRk8uVLDbj1hrW/q9YVXu1++NsfppzyDLRHh92WWXNR4PPPDAIFVYh8DYCxyeOnMDn/VfXC9cHfs9qgP9CBzLbqZw/XduEK72g2aZJSmQB6wxwnS+y+fKm109+dWvFjeXN75K5Wfldj8o521VCIybgHPJuO0x7V1MgSPZd461HUatLmb7bJvAIAIC1gHUItxMJW40tVBllOc5jdA5D0hfe+21nizRnzyIvbS8dGwxS4xAjcA3Hv20P9qat7/dNA2L2R/bJjCIwInyzumpLG+Zb3WQ+qxDYBwE8rvVxrQYCoG6CkTgmcqH5XysMcJ00HLbD37QuElWPL7z0ktdq7m7vBlWXlpvltV1ZS8SGBEB55IR2RGaMRYCx7PvHGPRYI0k0IeAgLUPpNZFtmzZ0nwqwriYe7NXiblTq7zpVK+6FmMEZu4So0F7tfHBBx9sssUI2LjcfjHLBRdc0Nx8BKe9RifHfs+nOFjs9i+mnW0vHYFlZ58Jl+LSn
W4l5k1SCCwFgTxUPfar6Y5dihtcxSWgCoGlKhAjWD+3dm2ze9v37Ok5mvRYzJ/aJkBdsWxZ45L/eMScq93KfNxcq9v2vEZgPgScS+ZDVZ1LVWD52WdunJv/ONGuv/u/t8/8q+1gPDdyAgLWbJfkl/7nl+O37rUYqZmP1rzppptmzI2aLx8BXLwed5m/8MIL+wpjW7eX/s5HeEb4lweUaZnYXoS57V5rV2/e516BaLv18+fiZlspZIx2xKX2neqMADifT/b+++/vVf28v95uv3ZqfzwfzqlE31unSZj3BtsAgXkQmDj/zA1Kjhw4UoZJs0cdRcgUd/zM502ah6aoksCCCeTHfQSo7X5ciPmJd2zc0ZyrdcEaZ0MEFljgd665prnFuCnV13ftKiJEbVdihOs//r3fa9yg6h9/85szFtl40UXNv+Oy/3YhbFrgxY/nYI2/I+SNeVpbSz59wTvT0z2D39b1/U1gvgX6OZdM//Koc8l87wj1j4XAxHmrm+089cGpjt854nPZnt/dMxZ90kgCbnKVHQMxgjGNRk3hX4RmERY+/vjjM46Wp59+uhkgphA1pguI6QPiUvF47u23326EiGmUY4SZw4xyjLojOE2hX4SUEbSmsDeebx1VmRqdj7TMO3LFFVc0wt8oUVeEwdHGqGuQG3i98sorjSA5thd1nHPOOUWEjxEOR/+nyw/E0cZ8dGj0I5YZhRL7OYLhTu1vt1/jGBmFgHgU/LRh/AXWf2lDsTK74c/LD79cRNC69pLTl41G8HTo9YNFfBBqLe1CqdZl/E1gFAU2l3et3bFpR6NpcWzfsfb2Yv0XNxTxZTl+UIiRFfE+aC2n3p/9Pmhdxt8Exk0gpgm4uwxZUyC6/+DB4s/Lz28XlyNb0xQCMeJ0X/l8PjJ10/r1M7oay8bcqk/t29d4PkLYuJlVPBdh6YflPKvt6rl548a2ZK3TF1z50EPFteXny6jn4nJb+ettK/AkgXkWGPRc4s7o87xjVD+SAmsn15XfL9Y2P1/Fd46p56eKDeV3kWXlNGXdvnP8VZvvISPZSY2qnYCANdvld9xxR7Fz585mIJqPsGwNWCNUe+ONN2aM0ozQMA8O86MpLT/MKMcIKCPAvOqqq5oha6dtphGiKYyNsDefOza1LcLZaFNaLu9zhK9zbW8s/9Zbb81wiTrzenOXaOekb/R3AAAgAElEQVQohZMRLsd+zY27tb+K/Vq7f3V0eKQFYt7VbX+6vdjxhYeaI/UiWGoXLl1979XF1A+nmpfs5PO3jnQnNY5Ai0B8yI8vxvmo7EM/OljEIy9x+ecN5U2wdm3d1Xg6wtiYQ2zVxCqmBJaUQJoTNYWscZn/i/HIRprmHY7lI5RtLTtuuKHxVApZo57tzz3Xuljz76hn6+WXt309TV+Qbr4VoW+q68kysFUILLbAoOeSaLdzyWLvPdtfDIGtT90y4zvHiWMniv3f3z+rKV+4dWMRN5FL30em3+48ndOslT1BYAEFTBGQYUeAGeFav6FiLPfuu+8WMZq10zrxfISIETp2WmYu+zsFgJ1GfEZgGn2I0DIfLdsp+I1tz6XP/bQ1+pna0KnPeTv7qXMhl8mNO7W/6v26kP2zLQK9BFatXtUIWS+5brLtovFr87Z924rN911TrPtHZ26I0i6EbVuBJwmMoMDV95XBzu5bitXZJWupmRGsbiw/3D9xZGfxD7+0vhxZcWau4tYQdgS7pkkEBhKIsPPgk08W1062Pxd8qpxjNUaOvrJ9e9twNW00QtZYJkbAdir91BPrPvONbxStI2U71el5Aosh4FyyGOq2Oa4C+XeOdjcZTd854sftfAqOgy0/gI9r/7V76Qmc9VFZll63hu9RHkhG4JbPVdqp9hgFmkaCxjJxefx83lk+LlfP72Lfbzs7tT/qSlMJVNn2vN5wjHCyH89O7Vzo5/P2L8R+Xej+jfL2Tn50snjtr/5klJu45Nt2srw8+lj2K/HqCyaKGOWqLLxAzEH18sOnb7B0dTnacnMZCCrzIxCXpZ04enrOyV8vb8KwamKl435+qCup9SvL/0Wznr/4t/+2kjpVMlsg5ltNJW5gFSNKVyyf2/ngg/Jy/pg/ddh6Yk7YGA0bZXXZjnx+1tktr+czL7z5ZnFbOX9uFOeMxTkGnEsWx73XVm9fd1vz6qsnDu90FUovsAV8/cjUmfPMyvLqIFcIzS/+pz/x6eKf/p1/Mr8bqVntpgjosMPzm1h1WGTW0xEcdhrxOGvhCp6IkHKQdnba9DDzw3aqM56fr3q7bbPK18a9/VVaqKt+AhGmxiVvCoE6CcSIingoBAicEahijtMIZKuoJ0LVeCgERlnAuWSU9462jaKA7xyjuFe0aS4CpgiYi5ZlCRAgQIAAAQIECBAgQIAAAQIECBAgkAkIWB0OBAgQIECAAAECBAgQIECAAAECBAgQGFBAwDognNUIECBAgAABAgQIECBAgAABAgQIECAgYHUMECBAgAABAgQIECBAgAABAgQIECBAYEABAeuAcFYjQIAAAQIECBAgQIAAAQIECBAgQICAgNUxQIAAAQIECBAgQIAAAQIECBAgQIAAgQEFBKwDwlmNAAECBAgQIECAAAECBAgQIECAAAECAlbHAAECBAgQIECAAAECBAgQIECAAAECBAYUELAOCGc1AgQIECBAgAABAgQIECBAgAABAgQICFgdAwQIECBAgAABAgQIECBAgAABAgQIEBhQQMA6IJzVCBAgQIAAAQIECBAgQIAAAQIECBAgIGB1DBAgQIAAAQIECBAgQIAAAQIECBAgQGBAAQHrgHBWI0CAAAECBAgQIECAAAECBAgQIECAgIDVMUCAAAECBAgQIECAAAECBAgQIECAAIEBBQSsA8JZjQABAgQIECBAgAABAgQIECBAgAABAgJWxwABAgQIECBAgAABAgQIECBAgAABAgQGFBCwDghnNQIECBAgQIAAAQIECBAgQIAAAQIECAhYHQMECBAgQIAAAQIECBAgQIAAAQIECBAYUEDAOiCc1QgQIECAAAECBAgQIECAAAECBAgQICBgdQwQIECAAAECBAgQIECAAAECBAgQIEBgQAEB64BwViNAgAABAgQIECBAgAABAgQIECBAgICA1TFAgAABAgQIECBAgAABAgQIECBAgACBAQUErAPCWY0AAQIECBAgQIAAAQIECBAgQIAAAQICVscAAQIECBAgQIAAAQIECBAgQIAAAQIEBhQQsA4IZzUCBAgQIECAAAECBAgQIECAAAECBAgIWB0DBAgQIECAAAECBAgQIECAAAECBAgQGFBAwDognNUIECBAgAABAgQIECBAgAABAgQIECAgYHUMECBAgAABAgQIECBAgAABAgQIECBAYEABAeuAcFYjQIAAAQIECBAgQIAAAQIECBAgQICAgNUxQIAAAQIECBAgQIAAAQIECBAgQIAAgQEFBKwDwlmNAAECBAgQIECAAAECBAgQIECAAAECAlbHAAECBAgQIECAAAECBAgQIECAAAECBAYUELAOCGc1AgQIECBAgAABAgQI
ECBAgAABAgQICFgdAwQIECBAgAABAgQIECBAgAABAgQIEBhQQMA6IJzVCBAgQIAAAQIECBAgQIAAAQIECBAgIGB1DBAgQIAAAQIECBAgQIAAAQIECBAgQGBAAQHrgHBWI0CAAAECBAgQIECAAAECBAgQIECAgIDVMUCAAAECBAgQIECAAAECBAgQIECAAIEBBQSsA8JZjQABAgQIECBAgAABAgQIECBAgAABAgJWxwABAgQIECBAgAABAgQIECBAgAABAgQGFBCwDghnNQIECBAgQIAAAQIECBAgQIAAAQIECAhYHQMECBAgQIAAAQIECBAgQIAAAQIECBAYUEDAOiCc1QgQIECAAAECBAgQIECAAAECBAgQIPBJBAQIEBhVgV8r/nZx3q+dO6rN0y4CCyrwyX/yt4rz//Z5jW2uvWRtsfbX1i7o9m2MwKgKPPDAA82mLTvXOWNU95N2LazA//J3/27xwD/4B42NOmcsrL2tjbbAPXfeU5x6/1Sjkb+98n8tlv3astFusNYRmCeB5Wctn6ea61vtWR+Vpb7d13MCBAgQIECAAAECBAgQIECAAAECBAgMLmCKgMHtrEmAAAECBAgQIECAAAECBAgQIECAQM0FBKw1PwB0nwABAgQIECBAgAABAgQIECBAgACBwQUErIPbWZMAAQIECBAgQIAAAQIECBAgQIAAgZoLCFhrfgDoPgECBAgQIECAAAECBAgQIECAAAECgwsIWAe3syYBAgQIECBAgAABAgQIECBAgAABAjUXELDW/ADQfQIECBAgQIAAAQIECBAgQIAAAQIEBhcQsA5uZ00CBAgQIECAAAECBAgQIECAAAECBGouIGCt+QGg+wQIECBAgAABAgQIECBAgAABAgQIDC4gYB3czpoECBAgQIAAAQIECBAgQIAAAQIECNRcQMBa8wNA9wkQIECAAAECBAgQIECAAAECBAgQGFxAwDq4nTUJECBAgAABAgQIECBAgAABAgQIEKi5gIC15geA7hMgQIAAAQIECBAgQIAAAQIECBAgMLiAgHVwO2sSIECAAAECBAgQIECAAAECBAgQIFBzAQFrzQ8A3SdAgAABAgQIECBAgAABAgQIECBAYHABAevgdtYkQIAAAQIECBAgQIAAAQIECBAgQKDmAgLWmh8Auk+AAAECBAgQIECAAAECBAgQIECAwOACAtbB7axJgAABAgQIECBAgAABAgQIECBAgEDNBQSsNT8AdJ8AAQIECBAgQIAAAQIECBAgQIAAgcEFBKyD21mTAAECBAgQIECAAAECBAgQIECAAIGaCwhYa34A6D4BAgQIECBAgAABAgQIECBAgAABAoMLCFgHt7MmAQIECBAgQIAAAQIECBAgQIAAAQI1FxCw1vwA0H0CBAgQIECAAAECBAgQIECAAAECBAYXELAObmdNAgQIECBAgAABAgQIECBAgAABAgRqLiBgrfkBoPsECBAgQIAAAQIECBAgQIAAAQIECAwuIGAd3M6aBAgQIECAAAECBAgQIECAAAECBAjUXEDAWvMDQPcJECBAgAABAgQIECBAgAABAgQIEBhcQMA6uJ01CRAgQIAAAQIECBAgQIAAAQIECBCouYCAteYHgO4TIECAAAECBAgQIECAAAECBAgQIDC4gIB1cDtrEiBAgAABAgQIECBAgAABAgQIECBQcwEBa80PAN0nQIAAAQIECBAgQIAAAQIECBAgQGBwAQHr4HbWJECAAAECBAgQIECAAAECBAgQIECg5gIC1pofALpPgAABAgQIECBAgAABAgQIECBAgMDgAgLWwe2sSYAAAQIECBAgQIAAAQIECBAgQIBAzQUErDU/AHSfAAECBAgQIECAAAECBAgQIECAAIHBBQSsg9tZkwABAgQIECBAgAABAgQIECBAgACBmgsIWGt+AOg+AQIECBAgQIAAAQIECBAgQIAAAQKDCwhYB7ezJgECBAgQIECAAAECBAgQIECAAAECNRcQsNb8ANB9AgQIECBAgAABAgQIECBAgAABAgQGFxCwDm5nTQIECBAgQIAAAQIECBAgQIAAAQIEai4gYK35AaD7BAgQIECAAAECBAgQIECAAAECBAgMLiBgHdzOmgQIECBAgAABAgQIECBAgAABAgQI1FxAwFrzA0D3CRAgQIAAAQIECBAgQIAAAQIECBAYXEDAOridNQkQIECAAAECBAgQIECAAAECBAgQqLmAgLXmB4DuEyBAgAABAgQIECBAgAABAgQIECAwuICAdXA7axIgQIAAAQIECBAgQIAAAQIECBAgUHMBAWvNDwDdJ0CAAAECBAgQIECAAAECBAgQIEBgcAEB6+B21iRAgAABAgQIECBAgAABAgQIECBAoOYCAtaaHwC6T4AAAQIECBAgQIAAAQIECBAgQIDA4AIC1sHtrEmAAAECBAgQIECAAAECBAgQIECAQM0FBKw1PwB0nwABAgQIECBAgAABAgQIECBAgACBwQUErIPbWZMAAQIECBAgQIAAAQIECBAgQIAAgZoLCFhrfgDoPgECBAgQIECAAAECBAgQIECAAAECgwsIWAe3syYBAgQIECBAgAABAgQIECBAgAABAjUXELDW/ADQfQIECBAgQIAAAQIECBAgQIAAAQIEBhcQsA5uZ00CBAgQIECAAAECBAgQIECAAAECBGouIGCt+QGg+wQIECBAgAABAgQIECBAgAABAgQIDC4gYB3czpoECBAgQIAAAQIECBAgQIAAAQIECNRcQMBa8wNA9wkQIECAAAECBAgQIECAAAECBAgQGFxAwDq4nTUJECBAgAABAgQIECBAgAABAgQIEKi5gIC15geA7hMgQIAAAQIECBAgQIAAAQIECBAgMLiAgHVwO2sSIECAAAECBAgQIECAAAECBAgQIFBzAQFrzQ8A3SdAgAABAgQIECBAgAABAgQIECBAYHABAevgdtYkQIAAAQIECBAgQIAAAQIECBAgQKDmAgLWmh8Auk+AAAECBAgQIECAAAECBAgQIECAwOACAtbB7axJgAABAgQIECBAgAABAgQIECBAgEDNBQSsNT8AdJ8AAQIECBAgQIAAAQIECBAgQIAAgcEFPjn4qtYkQIDA/Ar89Ud/U/zX//7e/G5E7QTGRODY28eKo7882mjtmvPXFKsvWD0mLddMAvMrMLVnqrmByesn53djaicwJgLHjx4vjhw40mjtuWvWFOdOTIxJyzWTwPwK7Dt4sPjg5MnGRjZ8aUOx7Oxl87tBtRMYUYHlxfLi7/+tvz+irRvPZglYx3O/aTWBWgj8TfHXxc//3/+7Fn3VSQK9BPbu3Vu8/PDexmJX37u52Pxbm3ut4nUCtRC46aabmv384T/7o1r0WScJ9BKY+ndvFru37m4sdvfmzcXENdf0WsXrBGohcPvXv168d+JEo69PHN5ZrPr1VbXot04SaBX49Cc+LWBtRRnyb1MEDAlodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQI
ECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFBCwDglodQIECBAgQIAAAQIECBAgQIAAAQIE6isgYK3vvtdzAgQIECBAgAABAgQIECBAgAABAgSGFPjkkOuPxeo7d+4s3n///UZbL7300uLzn//8WLR71Br5xBNPFGF59tlnF6+88kqxZs2aUWvigrTn6NGjxbPPPtvclmNqQdhrs5G9395bWV/XnD9RrP/ShsrqG6Si48eOF7tv3lWc+uBUMXndZLHx1k2DVGOdGgpM/+pocfBPDlXW841f21gsX7G8svoGqWj/9/YV+/71/mLV6pXF1qduKf9/1SDVWKfmAvu/v784+f7JShRWTaws/22+tJK6Bq0knSeOHztRTH7lkmLzfdcMWpX1CMwQ2L1vX/HhyWreK+dOTBSbLrqIMIElI+BcsmR2pY6MkEAtAtYIBiMUi/LAAw8IWAc4ACOgvvPOO5trPvjgg8XTTz89QE3jv0r0/Zlnnml2xDE1/vt0lHrw8sPVBayTX5lc9IB1z93PFUcOHGkQR3h8SRmyLnbINUr7W1s6C0y/PV1U+n5Y5GPv5Acniz2/u6fR4RPTx4sIW6//zg2dAbxCoIPAvu//uDyGTnR4dW5Pr71k7aIHrPFeSOeJlx9+udGeVRN+fJjbnrR0O4Gnfvzj4r0T1bxXrp2cFLC2Q/bc2Ao4l4ztrtPwERaoRcDar38EZymI3bJlSy2C2AgK33zzzQZRjMS88cYbO3LFyNU0ErjjQkv8hfDKw9Ul3l3dIzC0QB6mxihW4erQpCogQIDAkhZYdvayJd0/nSMwHwI/PXy4ePHj73SfKa8y3LrJFUPz4azO8RFwLhmffbWUWipgzfbmT37ykyIeUSJsrEOJcDUPDDsFrBGuxojVNEXA/fffXweeGX2McDlCeIXAfAo8cXhn1+p3bHqoOXJp460bu15yPwofLK7etrmIkXtRLl3ky1C7wnpx5ARieosnLlnXsV1HDhwudm/d3Xy913tnsUfExY8LW3ffUkw9/2ZjaoCNX/Plt+PO9UJXgW37tnd9/fFrHyumfzndWGb9F9eP/Ejpq++L88SpxsjuaK8f4rruXi/OQeDVb32r69JbHn20eGf69Hvl5o0bi62XX95x+U8tG+3g/73jx4sXpqYa7b+4/G8Ba8dd6YWPBZxLHAoEqhcQsFZvumRrvPLKK4t41LXkI5zraqDf8y8wlxBoWRnYzGX5+W/97C1EkHTXi9+Y/YJnCPQQiJClW9By4tjMS4hH/b0Q3Y15iOOhEBhGoNexvmzFmSAo3kO9lh+mLVWsG228pfzxQSFQtcBvruo+1UQemq5YvrzotXzV7VMfgcUU6HVucC5ZzL1j2+Mq8Ilxbbh2E1hIgZg6IubyjRI393KjtIXUty0CBAgQIECAAAECBAgQIECAwOgK1HoEa1zy3WlO0Xg+zccauy9CtW4llv/FL37RXCcuqf/sZz/b13qpDbFOPKLEttN0BVFPPNqVdtuNtnZaPtWd6sr736vPuUfe1nbtyusfxCXWb90/+T7IfaItvfrcrY39vHbZZZc1F4upEkwV0I+aZUZF4FR5if7J9081mrO8nNsuRr5GOV5ejpluLDJx/upi4vw1bZsc68flpsc/vqlK1LGyHJk6cf5E2+XzJzttO18m2pFK/mt6rHvw9TN3kV9Z3u16XZdLxns2xgIESoF2x1t+jMfx/VvljX86jZ7N3zcBGsflmvK9kN5X3ZDzbefvxbROt/dLvAfTZd+x7upymzFCXCFQhUCnYy//d3ghzhPRl3ajqtq9b2PZaF+cx9I5znmiiqNBHXMV+ODkyWLfwYPN1WIk7OfWdZ7iplP9cZn/r8opCz4s64sS9cRjdYeRuLF8Kh+cOv05L/7+sPzv/DUjczuJe75qAeeSqkXVN24CtQ5YI/jLg7N85915551FPKJEePfuu++23bcRAsa8pDG6MQ8r08Ix0jECuTwczCuKdeMu9FFi/tPHH3+8uOqqq5rhano+6shLBIw33XRTI9Rtt93YXsyT2m5O1XPOOWdGXemPV199tYhHKtHnvN1hlULWaE+7utO6sVz0LeZ3bde+CIBvv/32rnXk+yftg3gu9ksKn/OOxDLRrqpHl8b+Sf2OPkf9Ata2h5AnR1Rg3/f2N+/GHvPb3fLULcVjX36sGa5Gsye/MllsLZ/Py/Fjx4vdN+9qhDpxg6rWsnL1ymJzOXde3PG5U8m3HXer3rZ/5t
yBMefeHetub67+w5N/VKTtpvA3r7ufbXZqi+cJhEA+j/F9+7YVZ51VNN4P+TEez6+bnPnleOr5qWL/9/c1Q85WzUvKS//j/dAt9My3ffPurbPmJc7fL1ffu7lRX9xhfe/De9u+B/vZZms7/U2gncDB1w825zReXf7gtr38t3rvt/eWx/z+5uLt/g2Pf6/jGI33x6DniXzbEZDuPPzkrCbm54nGfMvl+zbOT84Ts6g8sYAC75Tfd7bt2VP8rLzBVGuJUPPaycni7muuaX1p1t9xg6rv7t3btp5Y+OK1a4snv/rVWUHrlb//+8V7J07Mqi/mlV1/223N51/Zvn2gwHdWxZ4g0EPAuaQHkJeXvECtA9Zh926Ebnno2K6+CAIj0OwVSMa6Ud+FF17YDPPa1RfPRZibwt9Oy0RdEcBOlyfYhb4hVQpG2wWrqb2xTLTv7bffboTK/ZQIa2OdTiXtjzfeeKOykDXqTGFqCq07bd/zBMZBIL4Af/O3v1mcODb7A3ne/vjCvOd393TtUtQRNxmKkUWb7+v9BaJrZR+/OP3Lo8WOjTvaflGPRdI2Y3Rh3ARJITCMwKEfHZwRILWrK0ZjtP4g0W65A2XAdKgMqbb96bZi4rw17RaZ03Px48PurbsawVWnEts8MnW4ePjnj3QccdtpXc8T6CQQ54ldZXh56EdnriBot2zcsG3PPXs6/nsd66R/sw+XN6W7ZfdX21Uz5+dOlKFutK/TeSxtM/qx8VY3k5szsBX6Eti9b1+x/bnnOi4bI0gjNN136FDxyrZtRczx2q7EMt996aV2LzWf+9mRI8WGMjB98pZbimtrciPmriBeHAuBfs8l8Vks/k1v9yNd6qhzyVjsco0sBWodsMYoygjtUmk3UjFeW7FixayDpTVcjcvU77jjjuKCCy5oXOYfwWEEobFclAhEu13qH8vkozLTFAPxfH65f7Q3D1djNGWMqoztpkv885sxRZ8mJiZmjBTN+xz/nbab6mo0uCxpuoL0dz//n1xSuNrqEq/HyNYIWKOkeU17haxRXwpXo51xs63Uvmh/3qfweeutt/ppbs9l8hHOEVTnI3p7rmwBAiMokI/2icnr02X+cblxKvFBJw9XY9RSjFKNZf7q/ZPlCNMT5ciml5pfbl9++OXGJZ3dRrL2S5HC1djm+i9uKKc0OP2FJL6cR5iUyq4yeHpicqdQqV9Yy7UVyEfnrT5vdfN4W/bxcRcrxYf+/H0To0Y3lMdmvHciBP2z8v0Sx2Z8MYjHY//ssUoCz4NlvenLRmwzTY9xsnwPxkjaFC7F/8cPIlX9yNEWypO1EjhRTgcTjyj5eSKfFqZxpUH5A1sqcXVBhJlrPx75He+NOIbTv9sHnj/QGN1dxXGaRpzHNuO8k6YUaD1PxAjceO90mu6jVjtVZysVaA1XLy6nA4jRqulS/BfffLN4Yer0Z5YY5brl0UeLV7/1rfcg3KgAACAASURBVFlteKFcLg9XW+uJdZ/av795uf9tu3Y1tpGmH9ixZUsR0xNEiVG0aZufKb/7bd105scFUwTMovfEAgj0ey6Jf9NTcS5ZgB1jE/MqUOuANQK6LeWJKZUI6VIgemn562D+WuteyEPMCN1i1GQevkUIGOtHQJcu44/gL5brVqJNr7zyShHrtyv5pekRrLZOHRDrxPOx3RScPvvsszMC1rxfeagb7e/W53btaX0utpvC1XYuqX0R/Ka+RMh6xRVXdOxzrJPqjPVaR+RGmyNETvWFd/Srk2Frmzv93Rq4h6tCYCkIxBfmu164q/lFuLVPz919ZkRG3PF8a8vdndeWK8TzOzY+1AyeIqiqImCNQOnqe6+e9SU8tveZcv7VCFajxHLTb0/Puoy7tS/+JtBLIML8mB6j3aX9MXo0H8W3rZw6IAVIUW8EO/H3pdeX74cvnB55HYFnhErDjpyLYzy+aNz14jdmzXe86WubZr3/qgiuell5vV4C1/3BdcWmr13ettNxeX4q8R6688W7ZgSZMS/xhvIqg+Xl+Sb9kBH/X8VxGu+NL9y6sTGFRh6exnli09c2Fvf+9r2NpsVyVbwX2wJ4srYCx8qRqfnI1bs3b541DUAEoJs2bCi2PHY6OIoRqDENQOu8rBGwpvLlMqD9w3IagLw06rnoouKqhx5qhqwRyH7u47A2tpGXFLCuWLbMSNfaHqGj1/F+zyUxPU1MJZb/u+5cMnr7U4u6C3yi+8tebScQIWw+YrI1XE3rpLA0/R2hXwpw29Ubz3ULV1vXbw0a8zrz12K9bpfrd2rLXJ/PA+pYt5NLvBbhZYxCTaXXlAdpnU59jvryEbcxgniYkkbaRh0RFHfa7jDbsC6BxRLoFq7GJfr5ZZdXl19gO5XN5RyRqcQ8rSfLS6mHLafnlGw/3UC8Fl/kU4mRtgqBYQQiwIwP8+3C1ag33g+pxPzFebiabzemBIhRralE+F9F2fan22eFq6ne/P0XQVJ+E6Aqtq2Oegt0+0Ic/9bno7qv+4MbOo4Szc8hcZwereC9Ee/FG77TfpvxXszPE1W9F+t9NOh9LvCvssv5IxTtNMdqBKO/k82/2m4agAheU+l06X/c4OqhG25oLhfzqyoExkVgLueSrbu+6lwyLjtWOzsK1HoEa0eVHi/ko0gjJMxHrrauGq/FSMo0UvS1115r3NypXYnluo26jNfygDZGbXYqrfXEevlUA53WG+b5GCmbSoz27OYSy8W0AOmmWv2MOu0VckafU329guxe/YzpCFIobWqAXlpeHyeB+OLZKSSKfkycv6Zo3EDk49IpeIqXW+s5fvREsfyC9nOM9Wu08f/sPl9etOdI+b8op96ffeOtfrdjOQIh0O0HhPR6Gom67OxlXdEmLlhTFD+caiwTl08PWyJEmsv7L+6ivqrzx4Jhm2P9GgnEVQ6dRq4GQ4wumnGeKEdydyqxbNy4Kk05MP2ro8WaC4Y7UHuNDl9bXu2QAuAq3oud+ub5+gnE5fhplGj0vlMommT+eRnApkA2gtFYv9NcrN00P1fe5CpGyqYyaD3dtuE1AlULOJdULaq+cRAQsA6wl9L8obFqXNreq+QBa75ur/Xavd4tVG23fHpuvkewRv0pRI5t9jPVQGv4HKNOW4Phbn1qfS0fwTpMf/N5aSOUNjVAq7S/l7pAms9urv2MmwENW5b3CLFWdvkiP+y2rU+gVSDCoX7nb1y24tdbVx/q7363mzZSxftvqAZbeckI9PoxITo66HliIZBWlYGuQmA+BPLRo58qL8NvveS/dZsx92ks9+GpU41H3PgqD1jPLQfLpDq379lTPHPXXUWMWG0tsU6nkbKty/qbwKgIOJeMyp7QjoUUELAOoJ2HpBEK5tMFtKsuH0057MjKvP4UakYbqqy3XR96PdcaHPc7WjaWS8Fsax29tjkfr4djPkI5pmxQCNRZIEKbGAkUl3UaCVTnI0HfQyAuw485WY9P/0U5JcbMEdQu0XeM1FUgzhMxTcz0L48VR7NpNZKHqw3qemQsvX7HjaRSidAzn0O1n95GwHpueXVjKjeXN6KKG1dFiRtabbjtt
iKmHYjpBSJ8bRe29rMdyxAYRwHnknHca9rcKiBgbRXp8XdrkBk3aFroEsHqzp07i9j2MCM1q2x33o4YSZqPJu22nXy5Vttu683Xa/nNy2Je117THMxXO9RLYLEF4kPOvu/tb9ytPN3JfLHbZPsEFkvgyIHDRdyRPJ93crHaYrsERkXAeWJU9oR2LJTAh+Ul/qlEWJrC0X63/0E5ijUvMcXAeydOFPn8rC9OTRXxiBIjYC8upwfYWgaxeTDb7/YsR2AcBJxLxmEvaWO/AgLWfqVGZLkIIS+77LIZI1YjpIxL61tDzV4ja6vsUmvAWmXdC1VXeCWzCFZ7zfm6UO2yHQILLRAjVXd84aEZN7uKeZRi/tbWy5anPp5zcqHbaHsEFkrg5TJY3fvw3hmbW33e6mJNOV9xXuJ9I4BdqL1iO4st0Ok8ETeOa31vHCxvSOiHusXeY7ZfhUBrQFpFnTG3aoxajblaf1re9CqC21Tiv1+MRxm4RtD65Fe/alRrFejqGBkB55KR2RUaUpGAgHVIyAjk5jIvamsIOtfN5+FqhIBPP/10x3lLFzJgrWr+07l6VLV869QA0Z+w7lTy6QzCOZ9/9o033ui0mucJjIVAHq7Gl+Vbdt8y64ZWqSMC1rHYpRo5oMDU81MzwtUv3Lqx2Hzf5lk/NET1U8+/KWAd0Nlq4yew5+49zR/hep0nDq+7TcA6frtYi9sIrCjnU03l4nXrZtx4qs3is56Ky/7blZgKIMLTKD8tpyGI6QJiOoIIXGPu1ig/K//7H//e7xWvbN9enJdNM9CuPs8RGBcB55Jx2VPa2a+AgLVfqY+Xaw1IL7jggqLf+UbnuKlZi7/66qvNkasRrkaQNyqXsOftiNGs8Wi1mtWh8ok8qFzMvkTAmk9RMJf5YFvXbddPzxEYF4FDrx+c8aV5259u73on83Hpl3YSGEQgQtNULrlusrjhOzcMUo11CCwpgelyntVD5ajUVJwnltTu1ZkuAp8q511tlo8+6nmTqy5VdXwpbpwVj62XX95YJuZ53fbcc80bZW0v//vVb32r4/peIDAuAs4l47KntHMuAgLWuWiVy0ZoGEFgCuPeLE96CxWwxrZSiW0uZiDZyhbtCZs0VUAElDFtQa+SB5mXlvMQLVZJ+7Xf7edhbKzbT5jcb92WI7CYAofLuSZTmTh/Qri6mDvDthddIL/kf/Irk4veHg0gMAoCRw7852YzYuqYVatn3/V8FNqpDQSqFvhMNnL0nenp4oNyTta42dV8lpinNeZiveqhhxqbiZGsMXVAPKcQGGcB55Jx3nva3kngE51e8HxngSuvvLL54kJehj+XeU7zALBzT6p9JQ+a42ZRvUpcVp+3s59Atledg74ebX/33Xf7fuRtveOOO2asN2gbrEdgFATyuz23zrfa2r6YN0khUBeBVRPdv8wenz5RFwr9rLnAyffP3OinH4r8vNLP8pYhMKoCnyvnQf3Ux9MExKX7+w6eGck91zbf9oMfFJ/+5/+88dj27LNdV48RrXk5ls3T2nVFLxIYYQHnkhHeOZo2sICAtQPdBx980OGVorjiiiuar8UIzF5hYoSI55xzTrFz586OdfbzQj5KMqYL6FbuvPPObi/Py2v5TaEiPO3WxgiLb7rppmY7brzxxpEakTsvQColMAYCy84+M79Y3JikW9lz93PdXvYagbEXiJu7pZJPF9DasbjMbf/397U+7W8CS1Jg+dlnRuxN/3K6OPlB58A1bhLnBldL8jCoZaditOrlGzY0+x6X7vcKO3fv21esv+22Wcvlo2FfKG9iFaNhO5XWbeRzwbZbJ83b2u41zxEYFQHnklHZE9pRpYCANdPsN8CM0YsxajGVBx54oGPIGkFjujFVrJPfDGmuOzIPdlNA2TpSNeq/8MILu4ab+XbzaQYiEM1Hyc61feGSj+6NALXdCN9oc36zrthOHs7OdbuWJ0CgOoENXzzzxSG+FD927aNF60jVI+U0Avf+b98s5+A7VN2G1URgBAVimoxU9n9/f+NGVnk5VQZLe8sAacfGHTNCpNb3zAh2TZMIDCyw9pLfaq4b54m4MWLrMR/niR0bH5pxk7iBN2hFAiMk8DvXXDNjFGtcuh83pmotEZhGABtzpsYl/Tc+9tiMRa6dnJxZz44ds0LYWCHq2fLoo811Y2qAc7OpCtIL+fywMX1Bayjb2j5/E1hsgQ1fXN9sgnPJYu8N269KwBysmWQEhGnUZQSVMeo0Lh2PQPCtt96aYR6BYCyT5hCNkDXCxFg+rRPr5YFqPD/MZfCxbjxSnbG9eKQ6Y3utgWtqdKfn83lPI1yNcDa1/+mnn57z/LKxTpjE9lIIHCN8o40RYMdr8ciD3FhnlOaTrerNpR4C4yiwdnJdEXPqpbknD71+qIhHPBfl+LETxYnp9lMDtH7BHsf+azOBXOD6715f3Pvb9zaeig//u7fubgSqMV1A/H28fC+0G53nkmjH0VIWmDh/TbHx1o3lqO39jW4e+9Wx4o61t/d1nvir8n2jEBhngdVlwPnsN77RnBM1wtMIWT8zMdG86dWHZSj643L6gHwk6d1XXz2j2zEa9u4yrI0ANso75XenDeVI14vLaQhSgBpBaTyf17Pj+uvb8qXpC9Ky0aZzyzZF+XI5j2s+8rZtBZ4ksMACK8vPUs4lC4xuc/MuYARrRrxly5YZQV+EhBG4trujfISFb7zxxoyRrGn5FLbm4ertt9/eWH7Y0i70jO3EI4WoqW15aPn222+33XQKbdOLeZ8HGc2ath2X/Od1RhD8xBNPNNqZ6o1lX3nllSJftm0jPUmAwIIK3PniXcXq81bP2GYErvFI4WpcOr1t37Zi5cTK5nLHyktFFQJLSWDivDXFLbtvKfKpAk6UPzLEeyEujU7havwAcdcLdzW7nsLXpWShLwRygavv21ys/9KZ0UfxWrvzxHV/cF2xPhulNP2284QjafwFYk7Uf/fIIzNuNPXnZRj6VDkdQDzikv8UdMacra9s315suuiiWR3fumlT8eQtt8yoJ25iler5WTkyNq8nlm1XT1ScAtu0kQh+Y47YeETgqxAYRYFBzyWHp2aPGh/F/mlT/QRqMYL1ggsuaAanEx//ktduV6dwMEZc5pe2dxpdGcs//vjjRYSnsU4eckb9sV4EmBHcplGmrduN9qTXop29StQZo2mjfc+WE6LHNlOJ1yKsjPZE2+L/X3vttcbL3cLSCDljztZ8ioCoK+rIS+7YySSWj9ciCI4pDVIb8+2nUDe1c8ZGsj9i+53c2q2TW+Y33Gq37LDP5fV3O6aG3Y71CbQKTJw30bxjc6+b7qR1Y7k0AjW/5Lm17vR33NzqkZ//y/Jy6KnGJdH5ndRXlneLnrxustj4tY1FLLfp1k3FwddPz9XabtReP9tObYvt53PAtmtfXt/q7PLtdst6bmkLROiZHzv99nau76FLyuP9tybXNkauHik/0EfA2jhWy+3H+2nzvZuLGPkd81Dm7YkfHFrfo722Pdfje+Z7Z37vZN2vr+UWXyD/d77ffyfj3/N0PK0q/53vVWL5
u174RsfzxIYyfN34tU2N89VZZ53V/DGi3Xyt/Wx7Lsd6Xl8/57xeffX60hXIL7WPS+/nUs4rv+8cevLJ4oU332w8YrRpHoZG3TGq9OYyRI3ws1O5thxdenEZ2EaYGvVEwJpKhLP91hPrRGAbbYh6ImCNEnXMtW+d2ur5egmM8rmk3Z5wLmmn4rmFFjjro7Is9EbHZXvT5YlykPBs0PUGdYnwsjUMHYW6WttQZTtb6/b30hQ4+dHJ4rW/+pOl2bkx61V8KY4PLsriCUTA9/LDexsNuLoM9TaXI8iUxRHwflgc905b/cryf9F86Ycn/6jTYp6fZ4EUnjpXzDN0n9XHD6QxrUmUuzdvblyOrsyvQLpRVbdAtZ8WVFFPFXX009ZxXGb9179evHfi9I+lTxzeOevH0HHs01Jqs3PJwu3NT3/i08U//Tv/ZOE2WIMt1WIE66D7cZBwNbY16HqDtrOqcDW2X2Vdrf2Zz7pbt+VvAgSqFfCFuVpPtY23gPfDeO8/rZ8fAe+L+XFV6/gIDBuspp5WUU8VdYyPvJYuJQHnkqW0N+vXF3Ow1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJCFgrglQNAQIECBAgQIAAAQIECBAgQIAAAQL1ExCw1m+f6zEBAgQIECBAgAABAgQIECBAgAABAhUJnPVRWSqqSzUECBCoVOCvP/qb4v3//peV1qkyAuMqcPzY8eL49PFG81dNrCpWrV41rl3RbgKVChw+cLhZ37pL1lVat8oIjKtAfs5YvWpV8T+XD4UAgaL42WHnDMcBgRD4tbP+dvEbnzgbRoUCAtYKMVVFgAABAgQIECBAgAABAgQIECBAgEC9BEwRUK/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQ
oIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS0DAWq/9rbcECBAgQIAAAQIECBAgQIAAAQIECFQoIGCtEFNVBAgQIECAAAECBAgQIECAAAECBAjUS+CT9equ3hIgME4Cf/3R3xT/6W8OjVOTtZXAvAkcfP1gcehHp98P67+4vtjwpQ3zti0VExgngd1bdzebu3X31nFqurYSmDeBw1OHiwM/PNCo3zlj3phVPIYCe353T3Hq/VONlj90ww3FimXLxrAXmkygWoFf+/Sni79zzjnVVlrD2gSsNdzpukxgXAT+pvjr4r/8f++OS3O1k8C8Crzxn35SvPzM3sY2Pvyf/p/if9z09+Z1eyonMC4CzzzzTLOp//v3/49xabZ2EphXgf/4X/5jkd4bzhnzSq3yMRP441f+uDgxfaLR6q//w39Y/A+rVo1ZDzSXwPwICFiHdzVFwPCGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQ/8dRIAAAIABJREFUIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7njdJkCAAAECBAgQIECAAAECBAgQIEBgeAEB6/CGaiBAgAABAgQIECBAgAABAgQIECBAoKYCAtaa7vhBu3306NHi/fffH3R16xEgUEOBUx+cLI5PH69hz3WZwGmBOP7jfaAQINBbwDmjt5El6ivg/VHffa/nBAiMvsAnR7+Jw7fwpptuKiIYjLJly5bixhtvHL7SGtZw5513Fk888USj52+88Ubx+c9/fskq/OQnPykefPDBOfXv0ksvLR544IE5rWNhAq0COzY+1PrUwH+vm1xXXH3v5oHXr2LFQz86WDz25ccaVW28dWNx/XduqKJadSxRgcMHDhcvf3tvZb277g9uKNZcMFFZfYNUtGvrruLA81ONVe/bt62I96VCoCqBOL5OVPQD1qicM3bdvKv8QeJUsf5L64u7XvhGVVTqqaHAUnt/HCnPkfGZKt4fay9ZW2zbv72Ge1WXqx
K47Qc/KI4dr2YAxLkTE8WOMmdRCNRdoBYBa4RlKWBdyqHgfB7MMWo1hauxnZ07dy7pgPXNN98s4riZS1mzZs1cFrcsgbYCRw4cafv8IE+uWr1qkNUqXWff9/Y169v//f3F1fdtLpavWF7pNlS2dAQiKKryPbDYo0ZPlqNWU7gaeyn+W8C6dI7XUehJBC4npk9U0pRROGcc/JODjfAoyqHXDzWuflg1sfjnskqAVbLgAkvt/TG1Z6r5/ohzpffHgh9SS2qDPz18uHjvRDXnjyUFozMEhhAwRUCGd9VVVxXnnHNO4/Hqq68OwTo+q8ao1NTn+O9O5eyzzy7ikcpnP/vZTosuiedNg7AkdqNOjIBAHiatXL1SuDoC+0QTFlZg2YplzQ2uFBQtLL6tjZ3AqjVnwtQ4Zyw7+8z7Z+w6o8EEKhbw/qgYVHW1EfjxwYPF+ttuazxi5K5CYL4EajGCtV+8CNXSSNe6BGxz6fNbb71VPPvss42g9fbbb++XdSyXS8dBNP7pp58uLrvssp79WLFiRc9lLECgl8APT/5R10VuX3dbc7RSXP6/uRwROsrlC+W0ACvLkbQxymLy+slRbqq2jYDA5HWXFvHoVGI00o6NO5ovP3F450iPbovR2g///JEiRh1F0Lrpa5s6dc3zBAYS2Hn4ya7rxbQzaVT4OJwzoo3pnLHhSxv8KNd173qxl4D3Ry8hr9dZ4NAf/mHX7l/5+79f/OzI6Svrrp2cLJ786le7Lj/KL3548mTx3sfTIRxbuXKUm6ptYy4gYB3zHbiQzY9L4O+///6F3OSibSsP2KPfE+W8MgoBAnMXiIBp8jrB6tzlrLFUBOKy61H/IWSpWOvH0hBwzlga+1Ev5kfA+2N+XNVKgACBKgRMEVCFojqWnEA+gtXcqktu9+oQAQIECBAgQIAAAQIECBAgQKAygVqPYI1Rir/4xS+amPmoxenp6Rk3Oep1c6y4IdLbb7/drC8uo4+7yl955ZVdd1YEeSnMi3VibtNoR1yKn9p2xRVXdKwnths3ZMrruOCCCxo3oOoUDOY3b8qDxNhu/lq0JZ93NdqTjKLuTvXnHW7n0qt9af3W/ZP2QTwfc+SGd2pPtDWc+mlT1x3y8YsC1n6ULDMuAsePHW9coh9leXmZ8sT5a8qbJJwspp4/UBz95dHG8xd9cUN5x+YNbbsUl2UfnipvplDWc6aOiWJteTf0XjdFabft1o3EneNTWXfJ6TusR/sOljc4mS7bd/LjG56sKdsdd5Xutc3W+v1NYPqX0+VxdLIBMXH+ROOy4zg249L9+P94X3S6AVu7YzGOwTUXTBTry/dNr5JvO27W03r85u+R/PXW9+jp9+5E1ykUerXF6wT6EYhj72j5nomSnzPi3+T073Wvc8b0L481zy9Rx6qJT/f173e+7dh+Oifk7e50zoj32sHXDzpn9LOTLTOwQLv3R1R26EeHij97/c8a9Y7a+yPaFJ/l8vdHnIti6qbWc9LAMFYk0Ebgg/LS/H3l/KfvlNlKXKYf5TNljvC5tWuLc+dwg+h29Xxq+fLi4nXriss3tP8s9k6Zs3xw6vRNE2P7qXxYPhc3+Erl3PJK1RVlXQqBKgRqHbBGYNhpbs0HHnig6Ruh3bvvvtvWO4K4m266aUYwmRZ84oknGoFfXFZ/4403tl0/gtS0rQhjY9m42VYe8MWKrUFt1P3ggw82A8Z2ld9xxx2N+vKQNJbr1OcILfObe0Wf83XzdsW8pJ36FNuIYDVcWvuRt7NT+9Iy+f5J+6Bbv6O+sBx2GoM8aG+1a+fsOQKjLhAh0ssP7200c/0X1zcuV37sy48VJ46duXPoWR+Vr7UErPu/t6/YW66X7ujcrp+XlJf/X/+d6zvOk5dve+0la4tt+7fPqCbuGv/tbE7NmIO203YPFFPFnnueK66+9+qyD9e0a47nCLQVeOzaR5tzF9/5wl3FsTKIiWM7LxGW5jdliy/Re7+9t9j//f1t64wn4yY88X7qNm9svu2bd28tLm2ZYzZ/j0x+pXw/fff6rtuNNt31x3cVE+et6dguLxAYRiBCmN1bdzeqiH+3tz51S7HjCw/NOGfEv92t54xDPzpYPHf3czOWy9sR/37HOSPeM51CnXzbKydWFu3m0Gw9Z3TabjpnbCznAr/+OzcMQ2JdAk2BcXp/PPXf/k3jfLfr5l1t35fx2dD7w8E9XwK79+0rvvvSS0UEmjPK1FTjz4vLkDXmdV296szNDdu1pWM95cJPldv4zXL9mzdtKraWj7xsefTR4r0TZ77rpNcibL3qoYeai76yfXvxuTKoVQhUIWCKgCEUIwC88MIL24arqdoUwEYY2qtEfe3C1db1Iri88847u4arsU6EkRGm5oFha13z8XeEnLHdbuFqal/49VoutTHq7dXvWGbnzp1DdStvT1UjYodqkJUJVCgQI3xaw9V21ccX4T2/u6druBrrHXh+qvHFO40ObFfXXJ57uQyPem335YdfLkOvfXOp1rIEmgL7ymOnNVxt5YlRpd/87W92DVdjnfiRIoKoqo7HeB/FTby6hbqxzR1f2NEcUd7adn8TqFIgrn5oDVfb1R//dvdzbqn6nLF7666e2433U5zTFAJVC8S5YpTfH/GDdZxT8h/UWw28P1pF/F2FwNd37Sq2P/fc7HA1qzxuoBVB57GPbz7Vbrv91BM3r4ptbSsfCoHFFqj1CNa45DwP02KUaLosP8LJ1lGj+c6K9fLwMi5Rf/zxxxuX+Meox6gngr5nnnmmsVoEfzFlQLepBlJbYv3Ydgr34pL6VKKeVGc8d/vttzdCx3QTpqgjwty0TGpHPqoz73OM+kyjVmOb0e9UBrmxU2w3D5Ojv7HtTi7RlgiV33rrreZ22/1H6le8FgYxHcBv/MZvFH/5l3/ZsI5tpn7F61u2bJk1crddve2eax3BGn1KUzbkUxJEn6JvQth2ip4bVYH0ITvuaL6hHLG3srxkOcqa8rLpVOKLch7wfKEc/bPp1k3Nu7XHF4oD5ajYFFId+9WxxqjTKkaVpjpjm3GJXbQvpghojDj89kvNLwkxii9GQsWl3gqBuQj85wOn74gb74G4Wciyj4+hZWefOZbyL8wxSnVjefzHpZTpeDsydbjYVQY76f205549jdF8nUbl9du+uMQ0Stpm3EU9SrwHIsSNcCpKjCrfXY5Iah0R3nhRIVChQB7MxGjWtR9P47Jsxa83tzJVHpf5jxbxb3O8Z9J5pfWcEXVWdc6IbUeJbV5ajgBP57Qj5fs8P2fEOa3TNCAVcqmqZgInps+MjpvL+yOO1ZhmKUq790d8Dqti1HX8IB2ln/dHnA9jCimFwP/f3v3FWlYVdgA+JH1oZ1LxYXjsgI+DWGKQ2GgZ/7w4Q2wsjomkHaA+1IFooDFi1A4YdQT8BzJt44wvAmMbSJzi1Eb0RTJjTEyEB6IyfRT7CA/QBHyk+3eGdWbfw9nnnHvvunfuvetbyQ33nr332mt9+2zO3N9de631Cnzz9OnRE90UhqXcfejQ6OYuB8lI0/Kof/ZJMJqvhKw/v//+Nz2m/
3hXR7+efzxwYHT7jTeO60nJY/7f6kbI/rILalMymjWP++dcKT+6997xf1N+0k1TkBA2JSNn/6UbOVvKW3btmnzvGwLrFWg6YA1eP0TsPw5++eWXz105vh+uJph88sknV1yLhG95jD51ltAyI0+HphooByese/rpp2eGdgn3EvSVkiBx+nH4HD993py/v99Qn9PWtYSqpT39EDSvzXNJ2ByPlBKQTvdl0tE3vkn7YhPbUtLe/Jx+l6kPytyt88Ls6br7P/cD6Ex10J+XtuyXNucr4eus6zCvftsIXGqBhDdHf3bPYBh09gcX/1E063H8hEj5RTWj7UoQm//WCFhjc/Spo5NfPPJz/hmVX9T37d83+qd9d435EjDlMb3px63HGxUCCwRmva/LIQlsSqg0dK/kF+P7fnX/+P1YptA4d+pslXsg50zd/T8e5B64/eTt4/kwyz2X0egKgc0QGLoPyrkTZJYy694a+sw48OmDVf5IdqSbemN6mo7MZ5zPjC92I9HLPZo/UCT4VQjUFKh1f4wuG42nh0nJ51CtPwgse3/kjxIC1prvjDbrymjUTAtQyvHbb58Ennktc50mAM3cqR/8/OfHI1wTsj7eTRsw/Yh/AtNSEtLe/bGV04Plsf73diHqTV/5yiRkTShbAtYSxI7POxWi9rdNTuIbAhUETBGwBsQEbv0QLiNXh0p/DtQcMyus6x87FK5mn3LOhIn5mhdIZmRrKQkc++0daut6X89I2H4bE/QOlczf2m9/QuBFUxmUEcKz6kyY2g9eswDWWksWOJtVivv0tgSsy0wBMX2cnwlcKoF54WoWlRp1/8rf04Wo+ZoXmmZUayn5BbYspLWefuWX8zKqY7qe/JKeESKlZFSrQmC1AhfmgByewzf3QHn/Z07WoVGpCUAz4qeUGoFnRtXm/hwamb0R99xq/ezfnsC8z4zMxdr/zNh/y4WRQ7OUEhiVMv7M+P2b58abddy81/KZMR2ulv1z7+ZJjVJeeM5nxjxL29YmsJr747oPXz94kjy5U4r7Y5DJhi0uUEaJppkf379/Rbjab3rmXf1kNxq1lKd+fWGBuP4+/YWpEsgOlc/2gtdMO5BRsgqBSyXQ/AjWtcD3R5EmKEzwNlTK4/7lkf2z3V9VhkZW5vV5dSVAXDQCtrRjup5F4eVQ+1fzen/u00w9sGiBqOyTY9K2ZUadzltUK+2MT5nioVagXILsjMYt/UlbEyb354NNyLpoCojVWNqXwEYJJKAcCoxyzoxeePj8cvMYl0cxS1sTsGbU0HrK0C/Kpc6sAJ9RFimvvTw1af56TuzYZgT6If2sTuexzGUfzcz7sZQXfrP+8Cb1zbs/c88lhC0j8mrcc7MMvEagCGShqUXvyWU/M/KHg9RXHqvOQnLrLfMCq9Td/5zKo9gKgZoCW/3+KFN6DPXZ/TEk4/W1CGT06lPdo/ilTI9Ina7zSPfIfx7xTynBaEa4zir/Nyc0zbQAWaiqlKE6ZtXrNQK1BYxgXYNofxRq5gJdVPojK2sFf4vOOb19owPW6VGyCRsXlQSWfZszZ84sOmTp7evpb0bWJsjOtA8ZUZxgtx8W5/u8lm39141iXfry2JHAmgXKfJlrrsCBBLa5wK63mitsm19Cza8osNv9UFFTVQQIEFi7QH/EaeY1vWbOILScJUHoX+zZMzlhpgrol/d2c6WWcvTUqfE8qrNK6hlPF/DG16x9vEZgswSMYF2l9HSQmBGYi4LBfqhaM2BN0Jtzp84ycnOV3am2+/T5+8HpvJNkvxJY17SZd85ltg1NB9A/Nm3PVAwlWE0/8v5YNHJ3mfPbh8BWETj/i+fHc52+2C3kUGOE3lbpl3YQWEYgI+zO/eAXo9930wZkSoosNtUvr728/hF4y7TDPgS2i8Ciz4z+okDbpU/aSaCWgPujlqR6tqLA77pMol/u/O53FzbzlW4O1lIS0PZD2a/ceut4AawyT+s/fPvb4wWuskjVweuvHy9olakGFAJbSUDAusqrMT0yctGcqqusfqndc86Eepfi3EMN7LusJmDs7zttO3SurfR6pjnoj1xNSLxsuLyV+qEtBKYFMg9lVkYvj+NPb/czgZ0ukFXOszJ6eRx/p/dX/wisR8Bnxnr0HLvTBTI9xclPnvBvqp1+ofVvIpBQNAtXrae8oxsB+/MHHhiHrGV0a/77RL7eqPvtXciaqQgyR6uwdT3ajq0lIGCtJblJ9WTuz5tuumlytgSUmR901ojLRXOW1mzyWgPWmm24FHXFP/Zl9G1G8gpYL8WVcM6aAs/+9zOjBz/+4KTKzPmYeSszP+v0HKsnj5yoeWp1EdgSAqc+99jop//200lbsujVu/7muvHiV7vfenF+sOe7Ed5ZmVwh0LJAwtVjB45N/hiRz4zMJ5z5H6c/M3Jv+aNFy++W9vru/mjvmrfa43nzpK7VJKHps8ePjx7v1rHJQli/7BaxSnhbyu+6Ua93njgxHtl696FDg4tqrfX8jiOwWgEB62rFpvbPXJ3TC0qts8rBwxPiZWGlUvJ4ehZXGhoxupkBa99gO45EHUS3gUBjAhll8djdj016nVXSD3/jlsFVzQWsjb1BGujuuS4w7YerR04eGVylfDR6XcDawHtCF+cL5A9yJTS97sPXjY587/bBz4zT9/1QwDqf09YdJuD+2GEXVHcGBd7SW6Dq5v37R8fvuGNw39VuuLlb3yVfKb98/vnRb7tcJAtqZXGslIxsTdCaKQcWLa612nPbn8BqBCxytRqtbt/pMHUz5w3NlADlfGnHd77zncFwdZXdWvfu0wHrsi79uVu3wsjPMsdu2r+WoHgo7F43sAoIbJLA+XPnRy/94aXx2fbs3TM6cnL4F+VNapLTENhUgXM/ODs53+GvH54Trm5qs5yMwJYUeLabo3vFZ8accHVLdkCjCGyggPtjA3FVveUE+gFrf8Gr2g3NYlZHbrxx9KN77x09041uzejVUr75wx/WPp36CKxKQMC6Kq4LO7///e+fHHW2G66+WeW5556bnKrfhs06/7zzJGDth4uLFv4qdfXnkb322mvnnWJTtj366KOjt73tbeOv/mjhoZOXQLZsF7AOSXl9uwjkUbZSru4e71QItCbQn3c402IoBAgMC/y+W/ytlEwLsPvyi1NoDB9lC4E2BPr3R6bLcH+0cd1b7WUWnSolAesrr659IdA/dCNSM1I1X/l+qGQKgUc/85nJ5kwfkNGtCoFLJSBgXYP8+94Ynp5DM4p0mZGOy47onNecZc5Tjq9xvnltmbWtPyVB5opdVB555JEVdlshNO6HvOnDIvN+PxOuboU+LHK3ncA8gddeXrlK+rx9M52AQmAnCyyaK/KFXri0kx30jcAyAovul3xmvPTChScklqnPPgRaEnB/tHS1d2ZfM7L0Lbt2TTp38qmnlurorCD2W91I1Cxula98P69c0w306pdME6AQuFQCAtYB+VdeeWVgy2iUlePLSMUEcItGOuYx+He+853j/RYFdoMn7Tb0R0cm2BsKUfP6Bz7wgXlVbci2zAlbSkamfvnLXx48T9rY
355wdnr6hcGDN3BDAtLSjlyr1fQhi40pBLa7wBVXXXzM5plusauhEDWvH/vQV7d7d7WfwJsE9ly5Z/Ja7oGhkqkE+nO1Du3ndQI7WaC/6Fv+4DDvM+Oh3uKJO9lE3wgUgf79kacjhu6P1155dTR9fwztS5fAVhb4ZPfofinfOn16vDDVvPLNbp8PfuELb9rvPV1YW8pPurlW541ind7WnzJg1rkzX6tCYKMEBKw92X7Al9GVJcCcDkUTdH7pS1+aHJl9E6D25xPNxhLQJezM9xntuszIzqGL/ZGPfGSyKfWl3v4j9iW0TFuGwtfpuvvzns4LbaePm/Vz/B566KHJpizA9YlPfOJNbUmb0/bSxhzX95xV92a+1m9LrllC1mnP6T5Mvyc2s73ORaCmwL6/3jepLqOREqJmpfRSXnzhxdHpr50effGvvjCZd6/m+dVF4FILvO/whUUU0o5fdAtenThyYsUvxee7++HYga+OTh45eamb6vwELrnADYdvmLShfGb05zHuf2ZMj/gWIF3yy6cBGyyQ+2PX5RdH9OXfVP37I8Fq/k111767RtP3xwY3TfUENkTgyIEDK+ZEve3BB0eZF3U6BM2j/3/bjU7NtgSeR0+dWtGeg+9616SePPafkayPz5iaMfVkWynv2bdvlGkDpks/dP3fl15aUdesEbTTx/uZwLICf7Lsji3sd9ttt40SlqYkLM08nAn/Eq69/vrrKwgyirU/wrGMUk3QltAyx0yHcgkc+4/Rr9Y0oytz3oR+Kal/aKRq2p32lXB4ui3l3Olz6kzJvulz+pDvn3766VU/8p66Mldsccx/81WC3LSjtCnnTDtznvx3q5Rco7SzjF7NdcvXUB/S7gTLW6kPW8VSO7afwL79V48OfOrAZGReFi/52oFjMzuSRbDyC3V5LNQvyzOZvLjNBD7Uvf/Pnjo7+QNCQtZ8zSq5B8oCPx59niXktZ0ukDklb/nGLaNTn7vwy3Huh/zxYdYfIErQVD4z/th9figEdrJA7o9D/3xoTfeHz5Sd/M7YuX27fPfu0ZP33DMOPctI0YxSzdfb35ijNa8nNC0l4WeO6ZfU80g3t2rqyb455s4TJ0ZHH3tsErxO15PpCY7fccdM3DJ9QTlvqSs7H7v11tHNvSkgZ1bgRQJLChjB2oNKgJmvfhkKJrNPQrfvf//7K4K1hIcZ3dg/LoFlArgaozSXqSd9mA4t+wtk9fs3a+RlPwBd8n20YreYTPc1AXS++nXPaudazrcRx+Tapg/9aRlm9aEExOsJzjei/eoksB6BW75x6+jQFw/NrWLfDftGR392zyiLNpTyP+fOzz3GRgLbQSC/EOe9nfB0Xjn89cOjzzxxcWGF7Pv8uYujvecdaxuBnSRw4FMHl/rMuO9X94/y2VHKMz8enoJjJ/noS9sCy9wfe/9y72j6/sjTEgqB7SiQEaQJTG/ev39F83/XLXyVr364mqkAsu+sUafv6AZg/fyBB0YZlVpKjh2qJ/vOqqcce6wbWNYvqavflu1orc1bT6CJEaxltGn4+wtUzbocCSYz4jKrySckLSNSZ+2b1xKsZe7NPF5/5syZcYDYPy6jHjNKtB/U9etKexLmpVzZW3lv6Hx5PfvnvA8//PCK0LKcq4TECQhLsDp0/lJfgsL0uQSgqWt6RGbfsT+1wKy2ljYmbO67lHoz3cF0mD1dT85fbKa3zfo5dZY29xermrXvMq8N9SHnyFfan3POs13mPPYhsBqBg5+6cfTqyxdW5bx6/8V/cMyr48J+FwLTK3rzS8475qPdiIsbbtk/+um/PjV+bK2c86puVfX9h/ePMtI1JUFsWSV391svPgZX6l507j/rjvloL8zdNaOOfjv79V3VrVittCWwpwv0V/N+KTr9++aqaxe/b67Ye8Xo4fPHu0c5z42e+fGvx6O0X+wW57my+yX4yu4eOPDpA5PVoPvtueyyN1+PRededI9M19ivrz9n8vR+fiYQgf3dlBf7brjw/+tlPzOXbo8dAAALwklEQVSu7O6R8r6e9f/1WbLLfmYc/PTB8T00VJY592r+H7Da+2uoXV7fmQI7/f5Y9Bnh/tiZ7+tavcqozoz8TLlmyZyinDtBZ0aTfvZjHxs90T3an0f5s/hUAs1r9u4dj0I9eP31k/qH2px6fnTvvePjM5/rb7uAdi31pP4Evjn397rFt37T1ZOyd8+eycjaoTZ4ncBqBC7rHn1f+ez7ao62LwECBDZQ4NXXXx2d+eN/beAZVE1g+whknrb/vO/0uMEJGPLYoUKAwGj097v/bsLw76/+BxICBDqBzPVZpmrwmeEtQeCiwF1X3zkqUzA8e/z4ijlDORFoVeBPu6ki//zd7261+9X6bYqAapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQIECBAgAABAgQIECBAgACBagIC1mqUKiJAgAABAgQIECBAgAABAgQIECBAoDUBAWtrV1x/CRAgQIAAAQIECBAgQIAAAQIECBCoJiBgrUapIgIECBAgQIAAAQIECBAgQIAAAQIEWhMQsLZ2xfWXAAECBAgQIECAAAECBAgQIECAAIFqAgLWapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQIECBAgAABAgQIECBAgACBagIC1mqUKiJAgAABAgQIECBAgAABAgQIECBAoDUBAWtrV1x/CRAgQIAAAQIECBAgQIAAAQIECBCoJiBgrUapIgIECBAgQIAAAQIECBAgQIAAAQIEWhMQsLZ2xfWXAAECBAgQIECAAAECBAgQIECAAIFqAgLWapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQ
IECBAgAABAgQIECBAgACBagIC1mqUKiJAgAABAgQIECBAgAABAgQIECBAoDUBAWtrV1x/CRAgQIAAAQIECBAgQIAAAQIECBCoJiBgrUapIgIECBAgQIAAAQIECBAgQIAAAQIEWhMQsLZ2xfWXAAECBAgQIECAAAECBAgQIECAAIFqAgLWapQqIkCAAAECBAgQIECAAAECBAgQIECgNQEBa2tXXH8JECBAgAABAgQIECBAgAABAgQIEKgmIGCtRqkiAgQIECBAgAABAgQIECBAgAABAgRaExCwtnbF9ZcAAQIECBAgQIAAAQIECBAgQIAAgWoCAtZqlCoiQIAAAQIECBAgQIAAAQIECBAgQKA1AQFra1dcfwkQIECAAAECBAgQIECAAAECBAgQqCYgYK1GqSICBAgQIECAAAECBAgQIECAAAECBFoTELC2dsX1lwABAgQIECBAgAABAgQIECBAgACBagKXvd6VarWpiAABAgQIECBAgAABAgQIECBAgAABAg0JGMHa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEhAwNrQxdZVAgQIECBAgAABAgQIECBAgAABAgTqCghY63qqjQABAgQIECBAgAABAgQIECBAgACBhgQErA1dbF0lQIAAAQIECBAgQIAAAQIECBAgQKCugIC1rqfaCBAgQIAAAQIECBAgQIAAAQIECBBoSEDA2tDF1lUCBAgQIECAAAECBAgQIECAAAECBOoKCFjreqqNAAECBAgQIECAAAECBAgQIECAAIGGBASsDV1sXSVAgAABAgQIECBAgAABAgQIECBAoK6AgLWup9oIECBAgAABAgQIECBAgAABAgQIEGhIQMDa0MXWVQIECBAgQIAAAQIECBAgQIAAAQIE6goIWOt6qo0AAQIECBAgQIAAAQIECBAgQIAAgYYEBKwNXWxdJUCAAAECBAgQIECAAAECBAgQIECgroCAta6n2ggQIECAAAECBAgQIECAAAECBAgQaEjg/wE2Mysh0hVdCAAAAABJRU5ErkJggg==) Uso de validação cruzada com 5 *folds*:
###Code
CV = 5
###Output
_____no_output_____
###Markdown
Geração dos modelos:
###Code
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
###Output
_____no_output_____
###Markdown
Gráfico BoxPlot ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABAAAAAKYCAYAAADt+IqXAAAABHNCSVQICAgIfAhkiAAAIABJREFUeF7svQu0JWV5rosJAzkcjofNYbDZpNN7pdP2IIQQTocgu0NwpTcSREREVEKQIPEWvG3jPe5sh8PhTjxuh8Nbsr0hXuIVFRERCWKLbUsQERERAduVtm0REQkSQEU479PWj2UxLzXnWt1r1pzPN8bbVavqvz7/XzX/76uas3fZRZOABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQkIAEJSEACEpCABCQgAQlIQAISkIAEJCABCUhAAhKQgAQksDMI/PrOqGQZ6tgvdd4b/WwZ6rZKCUhAAhKQgAQkIAEJSEACEpDAxBGYxgDAXqH8tGjX6DvRzyeOug2SgAQkIAEJSEACEpCABCQgAQlIYNEEjk8JX4veH80tujQLkIAEJCABCUhAAhKQgAQkIAEJSGDiCOyfFr0zuiv6UXRqtMfEtdIGSUACEpCABCQgAQlIQAISkIAEJLAoAqcl97ei+yp9JttViyrRzBKQgAQkIAEJSEACEpCABCQgAQlMFIHVac0HI374rwQAeBPgzGjPiWqpjZGABCQgAQlIQAISkIAEJCABCUhgLAK/llw4+t+LivNftl/PsTVjlWomCUhAAhKQgAQkIAEJSEACEpCABCaKwCFpzScifvG/GQDgjYBXRL4FMFFDZmMkIAEJSEACEpCABCQgAQlIQAKjEdg9yV8Q/ThqOv/l7x/k3IERbwpoEpCABCQgAQlIQAISkIAEJCABCXSQwLq0+XNRP+e/HH9L0hAs0CQgAQlIQAISkIAEJCABCUhAAhLoGIG90t5XRr1e/W8GBPgqwGEd65/NlYAEJCABCUhAAhKQgAQkIAEJzDwBXuc/Ovpa1HT2+/396aTdbebJCUACEpCABCQgAQlIQAISkIAEJNAhAvulra+L+jn7/Y4f36E+2lQJSEACEpCABCQgAQlIQAISkMBME9g1vceR/07Uz9Hvd/zLyfOQmaZn5yUgAQlIQAISkIAEJCABCUhAAh0hMJd2vivq5+QPOs5vAZwZ+T8CdGSwbaYEJCABCUhAAhKQgAQkIAEJzCYBfsn/5OiH0SBHf9C5ryfvytnEZ68lIAEJSEACEpCABCQgAQlIQALdILAmzfxENMjBH3buruR/VcRXCTQJSEACEpCABCQgAQlIQAISkIAEJozAHmnPU6J/j4Y5+cPO35Ay1k5Y/2yOBCQgAQlIQAISkIAEJCABCUhg5gnwnf2Doi9Gw5z7NucJIrwtIqigSUACEpCABCQgAQlIQAISkIAEpppAl16BJwCwW3RddP2AUeErAgdHd0abotsGpL0l5/as0g5I5ikJSEACEpCABCQgAQlIQAISkEC3CTyoQ80nAICzjgbZaTn57Oim6PnRoGDBPTl/e3T3oAI9JwEJSEACEpCABCQgAQlIQAIS6DqBLr0BcG9g46yjQcYTf9L+NOIJ/7ZBiT0nAQlIQAISkIAEJCABCUhAAhKYBQI8VdckIAEJSEACEpCABCQgAQlIQAISmHICBgCmfIDtngQkIAEJSEACEpCABCQgAQlIAAIGAJwHEpCABCQgAQlIQAISkIAEJCCBGSBgAGAGBtkuSkACEpCABCQgAQlIQAISkIAEDAA4ByQgAQlIQAISkIAEJCABCUhAAjNAwADADAyyXZSABCQgAQlIQAISkIAEJCABCRgAcA5IQAISkIAEJCABCUhAAhKQgARmgIABgBkYZLsoAQlIQAISkIAEJCABCUhAAhIwAOAckIAEJCABCUhAAhKQgAQkIAEJzAABAwAzMMh2UQISkIAEJCABCUhAAhKQgAQkYADAOSABCUhAAhKQgAQkIAEJSEACEpgBAgYAZmCQ7aIEJCABCUhAAhKQgAQkIAEJSMAAgHNAAhKQgAQkIAEJSEACEpCABCQwAwQMAMzAINtFCUhAAhKQgAQkIAEJSEACEpCAAQDngAQkIAEJSEACEpCABCQgAQlIYAYIGACYgUG2ixKQgAQkIAEJSEACEpCABCQggV1FIAEJTB2B3dOjfaOHRLtVvbsz21sr3TN1PbZDEpCABCQgAQlIQAISkMBQAgYAhiIygQQ6QYC3eeaiQ6IDot+O9o72iO6N7oi2Rd+Kro2uiggIcE6TgAQkIAEJSEACEpCABGaAgAGAGRhkuzj1BPZJD4+PHhGtjVZF/a7t23Pu+mhT9KloY0RwYJaMYImBj1kacfsqAQlIQAISkIAEJLCdQD8nQTwSkEA3CByUZj4zOjZa2aLJfC3g0OjgaF304ejd0U0t8nY9CV+HOCzaK9oQzVrgo+vjZ/slIAEJSEACEpCABBZJwADAIgGaXQLLSABn9m+jo6PyXf/SHL7zzyv/PPHnOuctAVTSsSUQsCL6jejVEemn0XjiT6CEtyT+NLomuiwyADCNoz27feKa3i/iOueavzu6JSK45xsvszsv7LkEJCABCUjgVwgYAHBCSKCbBPie/yuj9VG5jlnkb4nOj/4lujkiEIADzJP/uejh0XyEk4DhMJwe8cOAr4gIGEyTzaUzJ0SPjPh6BL+LsBDBRJPANBBYk05wH3hYxPW8Z8T85prmet4afSHaUO1nM9Cek7O/G1HGGyMCZm0CCPzwKNfaH0bU+84qbzbb31Ai+MZvkizGPpnMF0cE7+ajx0X86Cn3u7dH
4xhB0KdHsON++eKI4EkxeB4T0f5xjXvxy2qZuWdzP3pqywIZS9r2w+jGaFPEuGoSkIAEJCABCYTAM6LvRF+MeOqnSWDaCLAgfU/0k+i+SuzzOv9R0f5Rr+Aei2+chTMiFsw/i0p+FpZP6ZMvhztnBDhOiz4efTeq9xV2OCuaBLpMgKDe06JPR3zm1ed4ua7Z3hXx458fi06KcJgH2SdykjyUd1zUNli2OmnfX+XjmsNpLvaS7PygOke54+pVyVuCl3zWc9+iLK7pcY11wteqcmgjXOtG0JA3pMZtM/m+2SiTtzVOHqFM7u8/jmjfDdFnotOjtmOTpJoEJCABCUjgFwR6OQmykYAEJpsAC8ejIxaRGE+Hzo5eE22O+j2t4wkSPwDIkyPS8QbB4RH3ARa5PPni6dpC1FUjOHJk9KSI3zjgqV7h1NU+2W4JNAkQwHpudErEE2yuYe4D/O8eXOM8Iee3Lgj4HRitikjH3+y/Nbot6mWUxTUzjnNJ3qJ6fvbL8V51tj3Wr8zFrmXq7e7VltL+XufaHOvVvnqZjBdf1+hnjAeBD+5vbFdHKyPKOKtfJo9LQAISkIAEehHo9aHUK53HJCCBySDA4u8vIhz2Yhdmh9f3t9WODdolELCpyvP6bHEQMBwDXkl9WfV3lzYskPlhQ9p/VMRbEMOedHapf7ZVAoUAjj3O/1Oi8jT8vOzzyn1x/gkG8PnO02yecHNdzEd8dejZEUHCf4i4F+wMe3cqIbjYa83Btcr9i/sQgck3R5f1adTWHO8XuOiTZUkP89WGD0TvGrHU+lcKemW9KAd5u+GnvU7mGI4+zv+6iPHD+ed+zTzYEMFNk4AEJCABCbQi0OvDuFVGE0lAAstCAOeWhV+5dnly9NKorfNfGs1C89KIV3afFxFQYJF5esSbAcMWrEkyEUabWQzzHd6TIpyJxX7PeCI6ZiMk0IcA8/yUiLcAcOD/LnpfhHPcy4HkO+PXRFwjvDbPNUIQEafxo9HOMNqGehn3MxxrjP5cFxGgnESDL1+3WOr23ZoyGaNe41c4cK8jDcERvs5B8GcuYj78f5EmAQlIQAISaEWADxRNAhLoDoFHpKk8ASz2oexcO2bzWWyeHdWDB7wyv27M8nZ2Njjwg2Wfip4VrY50/nf2KFjfziTA0/zHRAS9eIr/v6K3RwtRP+eRYB73CH7Q770Rn/u8CfDYaC6aJKNPaJJtudpHvQRKCACcXQHifvfwSYZl2yQgAQlIYPIIGACYvDGxRRLoR4DFHgv3upP7T/l7MQtSnP/Lo/LEn3vCpAcAaCNPvT4b8fRzTcTrsZoEpp3AfDp4WMQ1cEn0keimaNg9gPO8CcCP5REM4A2iI6JJv9bTRK1BgK93fL46xjjy2w6aBCQgAQlIoDWB8hpx6wwmlIAElo0AC726o8sTvysX2Rocg69E/Pddu1dl/c4iy9xR2blfrY34RW6cl3HvX3PJe2JUXjveUe213OknwPfRL9hJ3eTtnD+IePUf43+4wKlva1zrvEJ+fnRgNBdRHu1fzu/Vp3ptRALl4Q1jSkBAk4AEJCABCbQmMO4CunUFJpSABJaMAD/oVb9mb8nf/V77HaXSm5O4vojk9wAmzWgTPxT2lKgEKsZtI8EDpElgsQR4mr6zAgCrU9eqqsEL2fJd+TtH7ADX+pcjvnPONcUbRXPRVSOWY/LlI8BnwKOq6rlvb16+plizBCQgAQl0kYABgC6Omm2eVQI87am/6rtUX+GhnHpZw14nXg7+OCz8/+R8B/qQiGDIuP3HaaK8SezncrC1zvEJ8Pr9zjLeANi/qown/wQAxzHaTP7DohUR5WrtCHDP+T+iUb9ydEe74gemYr3G17/42sapVUrK5TdQNAlIQAISkEBrAgYAWqMyoQSWnQCv6daf1PMr0CxER30K2OwITkXzzYJmmkn4+6I04pLotOivIp6G8kOAowYCPpo8L4zGdaCSVZPAdgI7M4jEXEcYT/LHdSq5j5Af4x5SyqwOuRlAgLePcMD/ekCa5inmyP+MBs0VxoDfMun1Rhf3N+7PBGv4EdgzItpBWn6/5ZxIk4AEJCABCbQmYACgNSoTSmDZCfDkjsU7C8myKDw8++ctomXcA/gecP21+q8torwdnZUAyFlVn5+S7RMjAgG8ETCKwbAeTBklr2klsBwECPbxBBjD+e/lLLZpFz/4WYIHlLfYr9S0qXNa0sBrfaVR+vT3STwoAHDcgDK51zP29fUanwNXRS+O/C2TUUbCtBKQgAQkMPaPaIlOAhLY+QR40n9NdGhUXkF9Uvb5DvK4zixPndZGxQlgkboxmnTj6f3/igh+/GV0dDQXFS6T3n7bJ4FRCeAIlrdduN4HOZSDyq4Hv0p5g9J77pcEYIfDvdRO924ps+7g18e61M79nyAwb29cGr052lJOupWABCQgAQm0JeAbAG1JmU4Ck0Hgn9OME6Li6B6bfV5JZUE4qlHGk6P6d4BvzN+L/Z8FRm3HuOlxgq6NXhZ9LCIYcmQ0F/lUMxC0qSLAE/8S6OOze1znnbw4nFi9zOrQ2Jtx2zN2hcuQESd8Q8TvkYxiw4I1CynsiojxLc4/bxvsH5X/5pTfLflQ9JZoIdIkIAEJSEACYxEwADAWNjNJYNkI4OjzFgBOO9cvi0R+Hf+ZEc5wW8NBJnhwYlS+A8wilcXluK8Wt617qdPxSjNvLcBlPnpcRFBkZeQ9LhC0qSCA88mr+wTuuGbHDXJxzyjXPE+yy9cBCiTuA6g4ouX4sG09fSljWJ6unede86XorCVu+CUpj3t4/d7LOPG2F0Fagr4EA46IvhhtiYYFFZJEk4AEJCABCTyQgIvjBzLxiAQmmQDf/XxndFC0omooi8KXR6+NeHpfnhJWpx+wYfHPK/N8fxQnuRh5ecLUVYPNudHl0TER/1UWv5FAsGQWnk52ddxsdzsC5fVv5jPXfnHi2+X+Zaq9q/wcoUyeLNcNJ7c4lzihXDttnM3mmwVt8jSq9s8aAQI+BHwXqmNPyJZ7/T4RgZsN1XE3EpCABCQggZEIuCgeCZeJJTARBM5PK86JWCBiLLx5QvSq6NSIV0Z7Bff4oTwc4mdFBAwOrqXblv1XRzgEXTf6cnb00og+XRQ1nZyu99H2zx6BhXQZYVzjBAF6XedVkp6b3XJ0VVQCfwvZ39pIiXNZnHeCBW2NNxMIGGC8VdC1N4na9nNnp+NpP8Hd8hWBA7LPW1+MoyYBCUhAAhIYmcCoi4eRKzCDBCSw5ARw/F8f7RvxVKg8eVuf/QOjjRG/5I8jzEKcQB9PC38r4pXStdXf2Wy3W6I3RhdEw94eqLJM/AYH5vpoIYIHbzzwRgB91yTQRQI4gtdER0U8Bf7j6LKI67ytETQgH8469wbuE82gH3/jvO8e/XbU9kEBbeKehBFwo3xtaQgw7tzzV0flqwDPy/7zIwMtS8PYUiQgAQnMDIFJDgDwhKIsJkYZEJwcnnK0XbSMUrZpJTApBBbSEJ74Eww4LSpznsUhQYHjo7IILwEAggDNa34hx/je/9sjypo2Y3HMVxsIBhA
IeGT042haAh3TNl72pz8BHOovRPx2xyER1/jnIwJ3vLY/zHD65yMChdi1EV+XaV7336yO8cYQr5xzbxnmZHJfmYu4/2ALEV/J0ZaOAOP88OgZEWPCfZ4fhT1v6aqwJAlIQAISmAUCTWdgkvrM4uYx0ahtZAGCo8PTC4MAkzSitmWpCbCAJwjwrxG/gL+mVgHzvyzGe9VbfjjvPTl5fjTtr8jjPBEAgBn3B59O9poVHpt0AjzxvyRaVenZ2XLtbooGOencD46Mnh5xX+A1/4ujq6Km8ao55/mtgYOjE6N3NxM1/ubeg3PKtUVw7evRzUPyeHo0AtyzeVOLcSQAxNcznhsxXqO8BTJaraaWgAQkIIGpIzCqc70zAfDqYXn6MGq9OP5Fo+Y1vQS6RGAhjX1TxCLwEdFREV8D6Hdt81SORf+nog3R1VGbp4dJNhWGszTtwY6pGCg70ZMAc/f9Edc41zqfka+M3hbxJLjXU3dezceJ/8tobYSDviH6cNTrWrgxxwko8BYegQN+S4PP03OiZuCMJ9G0hcDCfJWO19W5HzXT5pC2SAKMDb8H8I4I9odGZ0T/Myq/27DIKswuAQlIQALTTqCfkzAJ/eYD7jPRqE/xecX3pAinhoWOJoFpJ8Ci/6IIx/4jEU8HHxqtiHjtl+uAhT5vCrCAXKjEUz4XjYGgSaBDBAja4QRybfOjnusintbzxtwXo4WI1/o5vzp6WMSTfBx67gW8CfO6CEe9l/HZ+Y8RziXO/QERPxr66Ig834+4b/C0n0B9ScMT6VsiAhRXVmmy2SkGA+ptY/w3fh+KtrZJ3EjzkPz9Z9HvjZgXXvzvLRePmK9X8nNzkLFmncMYPzG6JCJoo0lAAhKQgARmkgDfj/tOxELooJkkYKdnnQBPhvaJWPATDJiLeO2XxeskB/3SPE0CEmhBgCfzOL08xec3Le6LfhbhnN8Q8T3+b0U/qI5z/t8ivvJD0ID8g4z7xLHRV6OfR6X8H2b/uxGfsZR9V3WO89+LeBuB+84oxj2Kz2vKoL5jWmY+M+noE/loI/1vI4KkBC2KsU74RkQ5P4q4T9aNwMZrqvOj1lXaA6en1QrlHn1KrUze4OBYW+MrAPCnPZTNuBIM0CQgAQlIQAJDCegMDEVkAgl0jgDfBeZJnCYBCUwnAZ7SXx7xK/CfiB4fzUf7VsrmfuPtnw0RwQKe/t8UDXs7jvM8reYp+Z9H/ODcXIQz3DTeQNoQ8QSe7XLce3hTsO3bgqOkbfaVv8fN37Z9vepsHuO3THiLg9+AIZgzHzFGZzUT+rcEJCABCUigSeBBzQNT8DdvALwsYuHy1Kjfa45T0FW7IAEJSEACM05gj/Sf1/H5GgCv6/PVn/8r4un4QnR9hFNOIGCc3/vA6S9vFK3OPn8/OKJ8Pmf5WhFBhXHL58k3baYftI+y7oiGGe2gz6M61nz16eaosMCBpn62BD7gVf9qFA9K6D8a1yivMKIM2sybBtSLwY7z9XqrU303jHnJT7sZ4+UIvvRtoCckIAEJSEACO4sAAQC/ArCzaFuPBCQgAQlMAgGcSpxYXgVHONSjvFY+rA84wpTZLH9UB3xYPZ6XgAQkIAEJSGAHEvArADsQrkVLQAISkIAEdhIBnh6P84S/bfN4yow0CUhAAhKQgAQ6TMDIfYcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQlIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQlIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQlIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCABCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0mYACgw4Nn0yUgAQlIQAISkIAEJCABCUhAAm0JGABoS8p0EpCABCQgAQlIQAISkIAEJCCBDhMwANDhwbPpEpCABCQgAQlIQAISkIAEJCCBtgQMALQlZToJSEACEpCABCQgAQ
lIQAISkECHCRgA6PDg2XQJSEACEpCABCQgAQlIQAISkEBbAgYA2pIynQQkIAEJSEACEpCABCQgAQlIoMMEDAB0ePBsugQkIAEJSEACEpCABCQgAQlIoC0BAwBtSZlOAhKQgAQkIAEJSEACEpCABCTQYQIGADo8eDZdAhKQgAQkIAEJSEACEpCABCTQloABgLakTCcBCUhAAhKQgAQkIAEJSEACEugwAQMAHR48my4BCUhAAhKQgAQkIAEJSEACEmhLwABAW1Kmk4AEJCABCUhAAhKQgAQkIAEJdJiAAYAOD55Nl4AEJCABCUhAAhKQgAQkIAEJtCVgAKAtKdNJQAISkIAEJCABCUhAAhKQgAQ6TMAAQIcHz6ZLQAISkIAEJCCRWLGrAAAgAElEQVQBCUhAAhKQgATaEjAA0JaU6SQgAQlIQAISkIAEJCABCUhAAh0msOuEt32cAMU4eSYcg82TgAQkIAEJSEACEpCABCQgAQksjsAkBwDWpmuro1Ed+j9Inj2i3cbIuzia5paABCQgAQlIQAISkIAEJCABCUhgZAKvT46fRPeNqa8n3yEj12oGCUhAAhKQgAQkIAEJSEACEpDAFBKY5DcAcOAvjkZt44rkWRXdE907hWNmlyQgAQlIQAISkIAEJCABCUhAAiMTGNW5HrmCRWQ4P3kvGyP/E5PnjOjuyADAGADNIgEJSEACEpCABCQgAQlIQALTR2CSAwDbghuNauuSwaf/o1IzvQQkIAEJSEACEpCABCQgAQlMNYFRf2BvqmHYOQlIQAISkIAEJCABCUhAAhKQwLQSMAAwrSNrvyQgAQlIQAISkIAEJCABCUhAAjUCBgCcDhKQgAQkIAEJSEACEpCABCQggRkgYABgBgbZLkpAAhKQgAQkIAEJSEACEpCABAwAOAckIAEJSEACEpCABCQgAQlIQAIzQMAAwAwMsl2UgAQkIAEJSEACEpCABCQgAQkYAHAOSEACEpCABCQgAQlIQAISkIAEZoCAAYAZGGS7KAEJSEACEpCABCQgAQlIQAISMADgHJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEDADMwCDbRQlIQAISkIAEJCABCUhAAhKQgAEA54AEJCABCUhAAhKQgAQkIAEJSGAGCBgAmIFBtosSkIAEJCABCUhAAhKQgAQkIAEDAM4BCUhAAhKQgAQkIAEJSEACEpDADBAwADADg2wXJSABCUhAAhKQgAQkIAEJSEACBgCcAxKQgAQkIAEJSEACEpCABCQggRkgYABgBgbZLkpAAhKQgAQkIAEJSEACEpCABAwAOAckIAEJSEACEpCABCQgAQlIQAIzQMAAwAwMsl2UgAQkIAEJSEACEpCABCQgAQkYAHAOSEACEpCABCQgAQlIQAISkIAEZoCAAYAZGGS7KAEJSEACEpCABCQgAQlIQAISMADgHJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEDADMwCDbRQlIQAISkIAEJCABCUhAAhKQgAEA54AEJCABCUhAAhKQgAQkIAEJSGAGCBgAmIFBtosSkIAEJCABCUhAAhKQgAQkIAEDAM4BCUhAAhKQgAQkIAEJSEACEpDADBAwADADg2wXJSABCUhAAhKQgAQkIAEJSEACBgCcAxKQgAQkIAEJSEACEpCABCQggRkgYABgBgbZLkpAAhKQgAQkIAEJSEACEpCABAwAOAckIAEJSEACEpCABCQgAQlIQAIzQMAAwAwMsl2UgAQkIAEJSEACEpCABCQgAQkYAHAOSEACEpCABCQgAQlIQAISkIAEZoCAAYAZGGS7KAEJSEACEpCABCQgAQlIQAISMADgHJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEdp2BPtpFCUhAAhKQgAQWT4CHBvtG66qi7s72+ujGxRdtCRKQgAQkIAEJ7AwCBgB2BmXrkIAEJCABCXSfwJ7pwhnRs6N7o0ujl3a/W/ZAAhKQgAQkMDsE/ArA7Iy1PZWABCQgAQkshsA+yfzUaL/o9uj90dbFFGheCUhAAhKQgAR2LgEDADuXt7VJQAISkIAEukhg9zT6+GhldHP0juiC6J4udsY2S0ACEpCABGaVgAGAWR15+y0BCUwaAe7HfC3L+/KkjYztgQCv//P0nyf/50ZviHT+nRsSkIAEJCCBjhHwNwA6NmA2VwISmDoCOPw8XT04enL02ogfVtMkUCdQAkT1YzjgfBd/FGsGmdqWcWcqYX5S3+YR623W+dMRGky/i8jWbG+97OY50i83t9KG3bLT5EB70Sg8KE+TgAQkIAEJjE3AAMDY6MwoAQlIYFEEcEz2iFZFOFanRDgJ/7ioUs08rQQIEB1W6xwO+SXRthE6vFfSHhox5zAcz01Rm4ATjirz88BobXRTRP13RMOMdh8QseagznOiNvkod/+I/Pz+AP/rwIZoa1QCH+uzPxfRPvpyXVS3NfnjyNoBuJGOIEZb4+0HuFEWRh+ujK6u/u63ob97R6uj+ej3In4/geO3RQvRF6PLI75WAZNRAzrJ0soYO9pC+bdG8MI4xr2on5G+BCno945qX7/6Pf6L6+4h1Tgwf0cJGDHXuO4x3t4ZJW+Vbeo2zHeC7nz+wgOm5XrY2Z2lDYj2FJ+sXG+0Cy3XNcc9l7q579IOjDbSXu4npX3VqYncwJX5Tz+45zr/J3KYlqZRz0gx34n4UD1oaYq0FAlIQAJLSoAPUByiF0TfjH4e3Rf9KDpkSWuysGkh8JJqjjBP0M+iUyIWYm1tXRJ+OSpl/Fv2T2+Zed+k+5da3m9k/9iWeY9OuhuiMs+Z923azcLtVRHXBf39WIQzXTeO0Z8fR2c0zrHQ51jpL9ufRM+KWBi2tQOT8EtRnduLhmTGsYbPp6LS73o7yj7nvh29MiLIM0q7hjTh/tNwODKiD/8UrahlfE/2YdhLH87xd0WvieBIEGRYwKBWtLtLQACH6/CI8XljNOrnA3P3g5VGzbsEzZ/IIgjonVox5b66chlayf2PgOKZ0Sci/Bbuceh70Wci2nZYRPBnUJAup8c22sH9ACZ14zhzjh96PbF2Yi773Ks+EsFw0o05D99PRty/dhTHSecwE+0zADATw2wnJdBJAizucWCeEhGk5MO+7hQYAOjksO6URjcDAMwb3hbZv2XtLOhw4nCA647s6S3ys
2iaj3BWEWXgcL8+auPIJ9ku/z0i4EDdP4xwSAcZ18ppEQEy8nwtmo/KU7KSd9QAAGXh1K4qBQzZ0j9+/LDJbVAAgDFhkVz6Wxb1X88xnHACKV+JvhVxzReun8/+CVFzMZ5DizLeOsDx/9eIoESdIWMBE9pQHJCybQYucFJeHeFUNschh7QdQID5x5xgjJg/R49YxxFJf1elUfOOWFVnkhPE+rsIptw/mM8707i+T4q4p9EGrjeuQ64v9P2IMePcDREBU4IUS+280o6jIhzk+ah+TXP/pX7uYfV7HQ71ZyPaDMNJN/r0nAiuBJMJZGsh4A3caSABCUhgxxNgEccinCewfx7NR0u9yN/xvbCGSSHAK6IsBlncs6DZ1qJhzL//ErV12OtFslb4s4g6b4uui3gquTZaE10TDbM3JMEfRjigPNln8UiZW3pkpD6e1vxFRPk3R2+Orojo+7hWXqWl7Lloc4uCePr2X6O23BiPl0anR1zjvG5/efTxaFNEf3kNFScEx+NPIxbh9JPxJA/9vyC6M1qs0W6e4HHvuTC6OurFEMa0r260A0cABswfAhv/LVoREQi4NupVVg5rS0SAOXt7xfrGbO9YonItZnkI7JFqeXOLt2q41rdGXJMEBRcirieutT+IuE/h+L88+s2IgOtCVO5j2V2U4cxzH6aetl89vDtpF6J9IgIVk27wvCj6k+ikCM7cB7kHz7QZAJjp4bfzEpDADiaAw4RDcFj02Oi4iA9OTQKLIXBTMuOUHRDhOOKIDVvQ4LzhBOJUsigif1tjzvIUnHwsVnlqxpxeFa2PqH/YohQn5hUR7aXda6PnR38bca5uLHr5XYwjItr7oeiCaLHOD04u6x7agHB4WdAOMvqOg0462rL3gMQ4y6dHJ0cs7rdFZ0f8sCeBgLpRFov/SyO+JvC8aD5iUf5X1TkCB8O4JslAW52zj4oIBPCkj7nTNOq4Knpc80T+hhdzZz4ieMkcYiENS5wY+qjtOAKMzTXRcyPmP0EArbsEjknTeRLN/YFxxfH+QERgtW6c577DvYD74BnRD6L/Hd1ST7iIfe5lBCRGMe5Zb4zIuzBKxmVMe33q5q0FPrNYh3GvIxA708biVJOABCQggaUnwFNOPsBfHBG5Pz3S+V96zrNYIo4hCxicM55sMNcGGY7pgRFOL/lw4kexo5OYp1I4rZ+OqB8HcN/oj6ptNkPtyqTAacSRoU04ygQW6msRFpY4mBynfzjp74mWYsHGgpvFIOU+LKJPgwynGW4ELFh00+9BRlADJ5nrnKAG1/0ro6bzXy+DwMIlEU4BfSXIcmT0mGix9wvaz9jRLuogUEP5oxjpYf/eiL7QRsaLMWJBTR3DbNy15rj5hrVnR50fp73D8hAA4Fq7OLosGub8DStvR/W9bbnL1b7F1rvY/PBZEbEe4B63JXpZhEPfdP5Jyz3yvIj7AuPO/ZI3og6NuH8tl9Eu7uPMxzbBqKXgNqivbcrnGir3PwIwBFnb3LcG1dv5c8s5iToPzw5MDYE2N5Cp6eyUdYQb+6QZH9R8SD8iOiE6eNIaaHs6T4Dvj/P5jWM6H+Eo4iT0M84/vDqJA7wt4qlSG6MeFp4YTi0LP+raGOEE4iAfHp0btTEcSYIGZ0QshHmyeV10RcS1sz6iPtrM8XdEV0dLYZS3R4TjSptZkC8MKJi0vKLPfWZzxJMjFpC9jAXl46O56uRF2Z4V4eAPM8onuMD39FdV4t7BmwE4fOPe51Ym7x9HPE38fMS4j2u0AUfk/dHqiHYypzh2U6NQPlOZm4hxZA7hOGyNCEI00+fQdoPhmojyycffP42Yb8xb1I/FQ3KO+mgX+7SBwAvjRp0Er5pGmv2jg6L9ot0jAh6lvoXsN8ePYBvpmRtcAxgBFtrMMdLTP+pciHoZddFW5h9jQ5/gsyW6Jqo7hLQRFjgtpIFBMwhAvXBDXFP0gbHGUSP/ICvl0yfaQ1m0hzpKH5oMGM/1EWPDtQnvdRF9gR3HFiLKIS3l0rZ9I9JQJ2VSx3XRjVF9XBmHAyPKhQk84Qsz+lfaR17Gt9m+HNpupGVsqJ8yCz/KG2a95iJzqMxh+I5qJyYDfaD/b4vOH1IA/dwUlfsCDFhXcB8qfTg0+/QTlox30+C/MiIvaWCNMR7MKcYDgxP1lfGrDj9gw5iQl+uAsWE+Nq1+LZIOuz1irGh7c7yYc6sirgvax3n6xbXJmDGfqIf5RtqDo7mIeugfaRgP0tCmXsY5yjk8emR0RTTOGPYqu5PHAKdJYJYJcJM/JuKmrHWLAB8GfDhePiHN5kOdD6ajokdHLIhGvcdSBoskPvi0ySfAHGRRtbPt1lTIQoo5xsKOxTILJ9rTNOYU8+mIiMUd/wvAg5uJBvzNvZFF0z1RWUQxrz8THR/NRTj0F0csxIYZbeT741wrLPJo+zMjvjfPQv0vq2Nwxdmk3OaCMYfGMrjdEFE23OjblVEv55AK9om4nnHILov4oa5+xiKbRTSLUtr7zoh8bQ2+GyICDnMR7WOBTvvacE2yBxh84czCm7FbLMfSRt5OYMHO/ON+VXeoWKCfEPF1grKIZw7CmDZcEn04ol+UVwxux0b1fHw+M1+2Rtzn+erJudWxbLYbZcOKOmF3UIRDxHHGmUU/v7/wgervbLYbc5ix5ZVg+jEX0XbqwzHAQaA+5l/9GqffL4zmIr6mMh/RZjjvGcF4S0Q/eXOFz6hitIl0T4yom3bTb+zWCD4bIubOjduP/qIf9Ik3ZxhHtvX24DQVbsw/5iyOHGkvjL4W9TP4lvbMZ39NRB8YF8a0ML8g+/W5TL7yJg9PqPm8OynaK1qIeEX8TRF9m4+YL8wFrhGOwQFOhTMO7nlRMcrhPkC/+doK48/vhRwW0T+stO9d2Yd18xrhvsL4wJl+7R7hhF4Z8RZTKSe7DzDm9HHRoyKY7h/RZ+pgXC6NPhJtjGDdxsgPJ+YYY312m0xJw1jQP9oCP/rDvCrXHF8RoI3cn9hvGv1eHz072hTxVhL29Iiy6CsGb+YlfWN+9+sX863UyTxlztaNOc21iJPN+BXOzJ+rIsbzQxFjX4zrlfGlPXClz7RvVUS+s6LXRbTp1AgWpWzmEuO6OaJ/3FvoQ7P9XNdfio6Pjo5oOwyb6XJoNmzX2eimvZRAXwLcHJ8U8eGldYsAN30WHyxSltv40ONDhQ89Pmz5kB/HmI98uLJA0CafAIsYvsO+HMYTXRZaLLB4KsQisde8YU6xyJ+LuFZYKJa3AbI71FiYUQaLX/4rpeJEsphjMcxCmwXoAdEVURsj399FfP+VBSjXzpaoOAzUcXF0TlR3dvLnooxftccBoH4WsgQuLoqou2ks2OnXXMQil4AHHPsZ5ygTo/xrIxayo9jWJP5qdEyEE8SPJp4XNZ2bNmXiyP1+hNMAR8peCqMcFtuMEWPO4r0Yi/HTIxwI6r062hDh/MPm0OgZEfdLgkDMxWLMAYJAnGOMmM/kY07goD8hOiTCIWDMis1lB+fm5Igxo07Gi4U9TgJ52bLefXtUWB6XfV7BZoxviphvzDXqYyxP
rLaMAw7LrRHGvX1NRN+fGXENch+4IGK8aT9lrojIC68yv+ayj2NzSkR59JG6aRvODm2FEdcEnwOUB9PSJtIxrsUon7ULvx9Bfcw5+vHTiDaeEV0TwYVjdaMseMLgqAiul0b0ZfcIBjCCHX/DoLCjTZxnfHhb59iItt4cURfnKX99RPmkvTFi3KiHc8yPwyLGlXsI7WReYeU87WMf5syzKyPy7x+VMWKfNnOOMcfg9/KIMWSebowYg8KR+ULbexn1PC1inJiztIv8pKeuw6vztJk5DO82Rl7GmP5cEcGqrdH26yL4M66URbtgTpmwYB71MsYCHqQpY3t7tU8e+sv4co75/72ocMzuA4yxoE7mxX9qnOU4bE+JKPPyiHGnvNXRERFj+hsR7KgPIy3zgTZizHHYw4g5XuyM7Dw/Iv2GaGtE2WVcOE85t0ZXR03jGH1eF1EXDPvNg2beqfubiahJQAISkMDiCLD4+vOIxQEfuOMai6cTxs1svp1O4NrUuFwBgKtS90LEQnR9xEKJhU/TOE6AgMXi9RFtbhsAYMFc5iML7/NqhbP42hBRP2LuXxkNWjyW7KS5KHp79KKIBSqLbo6XhSNPaFj0LrXRfzjQXhakLDS39KiEa5Enl3BjEctisSxQeyTfvsAti9VxF5alLhaplMVinzEYx7gnkZ9+0P6y2B6nrHoe2oiTgBPBuO0TsZbkODxZoLMIPz8iwAPvuyPSMk+fGh0VbYuYQ4gyYH1AdEFEPhiSj/4fFr02wsEg/8URcwVH4Zjo5Ah7X8TT5M3V3/T/udHREY7JhggnAIcBB5HxZI5R9mUR1w/1HRrh2NJeHEHmx4VRc25TL9cEdTKnYMA8IC9OEO2GCe3C+Ju2cC3RR/IyLnxmwIyAAmUinqL3cmJy+H6jnTzAIC/te0fEfYF2zEV8Jp0W9fpM2i/HeeoLP+bH26JLIpwu5gwOXhkrAgyM4xURZRfjWoUR7HjyemfEXCDdXMRXYtZGG6O3RDiEzBvSrIh4c+JZEXUdG70pqtse+YMxYi68IuL+UuYdgUk4wQDG8Occdnp0YnR39IaINzmYZ8yXwyP6fWTUy46vzjNHzol42l7mIszmI+YUdd4RMdfKfMtuX2NewAtjXJtzqW/GKu23s2XeMNaIsqh/HOPa4/76/YjrlTlPPzdF1PHTiDkwitGeMneZb/8Q8eYNbOjrXMR8ODM6LaI/pKkbdTLezGECxFx3MN8Q7R0x5mwZU978oB+UvW90XMTvKzC+zCUYN20hB26qDv6XbM+LmLMzaVyEmgQkIAEJLI4AiyMi2o+OWHjwITWOsWA5K2KBrU02ARYetyxjE29P3f8SrYvmokMiFrks3oqxEGMhy2KVhf0Xo1EWjfNJPxex6GeRz2KuGOV9ITo1YgH2sIgFFYu2NsbCi0Uo7WbBxkIPI39ZjMJ4qQ1uX4lwfOYinA8W+M2F4D45hnMDr89V22z62n/MGRwWjHGoO0p9M/U4AVcW4RhMitPQI+nAQzhYKyPuKd+N6PdSGWVRLobzwMKdseKpNXUyhjjYLOSL0S+4kJag2XzEWxUcY47SXs59NsIRKfMUp5w0vxcxj9ln7co8py5eB+Z+y9zj9ebromI4CKRljFdHXCvcq0+KmHf0A+f/vVG5bnAQaD/zgTmAQ/KnEX2hvLqRjvv+1VGZqxxj7I+OaNfv1jLQXvrJtbQhqrcVPrzmzHnY4qwOMs7/cUTfKAcH/sKo9ANO6MAIbnUrTE7MQebaB6Ozq/2SjrzwmYtgwGcb9ZS5Sbpfi2jrK6ONEQwoG+PaOSBi/HDAz43q1xgsYfWEaC76f6OmURb3WO4T74vKnNucfeYHY3hk9PsRPGgvzHHwadslEUzr9+kyhozFqqhu++UPvhLCGFwZMTfYMp4Yc4O6KRsH9bCIe9ebODnEuEcWNlyPoxocyzVBWbuNWkAtPRwXIsa4MGUsuDZKX0ctnnnIdcI1c1ZEgIsymRNYuf5hfnz0xOijUXHISQPXMh8pg/Gkn7SJ+zVjSxrma/2aY3xh81vRXEQ+0pW6s7vdmH+MP1vmDnOGds2klck4k5230xIIAW5+74pYyGrdIsBC5/IJaTIfIudFfCj9c8SHGx9Ye4zYPuYjT5MoR5t8As0Fxs5u8YZUyBNAFl2PjC6KigNAW3AeD41Y2F4RsUgfxXjiUhZg789+vb/Uc12EY3RktLYSi742Rlmk5anlUVFZ0FLmxVFZ7LYpa5Q01Mt9g7r3jx4eXRjVnRMWjzhNcLsxguswe0gSlD78KPvjLqTpd2lLvcxh9TfPMydwFFhQo6Wcq/StlFf6vDLHDotYV3IvZF40jXZsihjjgyOchvOjenl/kr9xkOuOF+d5gswTxXpfGD/mNw4AQRrGqmnUhyNHO9mn3TyJZEs7zonq10z+3L4uuDTiesFRoI65aFtUt0vyB3XW2dJWHCvmF873XrUMpZ9zOXZ4tDmqO6c4YLwyT/3DriN4w4/PGNoJr2Y/FnIMZ4m66sa84gkocwTWF0RwrRt94jqh3DURn2c4dfV09AeGhWt275/312SfIBD3IMqoX1+kw3D+bo3oC23qZQs5CGeY1O36/FGcN/pR/BnuQysi2sZnaZ0v+WkHfabNqzhQM+YkxygLbowH5dQN5xJevF1B+j+Kzop69a+ej3HivoJxjY9q9L+05cHZL2WNWs6OSs+1Dzv4cJ1yDdSvC+plvBiTEyPGnOuK679uXE98VlEOVuZ0fW5z/XJ91K9H5hLX+Z4R9TTrpizsOxFjRVuZmzNr5YKZWQB2fOYJcFO5ONow8yS6B4AbfPPDebl7sZAG8MGHA8/3pXHOjohGudfy4TTOAiHZtBkjgJO1OVodHR2xiK7PHRZDPCHlPsdilsVV20UPZa6LmLsstliEN20hBy6NjoxYUPGd9Q1R05nIoZ7GohhHpL6YncvfLOJZ4PdbxPUsbISDhQX10HYciPpikj6zyIQbjgxOzjBu9KH0g3zjtr2er84lRY5kzAWcTxbGS30/2S1llntaWZgfkGPFifth9ptOZ2n8iqo98PzNaO+I+cV4k+eoaP+Ieyg/2oUzwD73VVSMNpCOsWOMmNu9Pg+4n76vOsc+1wSOOWnrjkb+/BVjDn8zYjsXUU/TvpEDverkGA4M40c7izGP6CeOO6+RPyLCOS79xHHB6W5j+yYRLJkvN0TFGW7m5fqkPfW5xHWH84qRn3nSb7zIixhf8tWN41xLvRhwPaF6vcwZONLuVRFvdbBPmjKfsvsrBv/6uJeTzDvGk7rJW+o5pNqnXxtL4saWdjF2TVudA7CgbMaB8nvZrdV57h/7RYzFQq+EtWP1e0K/vg4qot5H+jzu/WVQHYs599vJzNhyv2F7aJ/C6AdtL9fh+Y10jA1lNK3cA7jmT4sYZ47xVhvX0OboxmamHn8zdgRTaCP3K+bNpLHs0eylPzTOJFz6VliiBJaXADcDTQJLRYAPZxZ6WyIWmOujJ0dlwbVU9ViOBFigfj5i8c5C+rDovIgFDQsbFktHRLdEpONeN8yRTZLtdnyEc4Z
RJgvxpnGM1+mZ6ysj2rEmauPE4BgdE50elUUhbZ6LeLqGo8T1syPsjhTK1ye4NuHGYpL6ymcBC/r5CL680cPifRg30hRnhLT0ZRwjb1mb0Z5xF6eUgyiDti2lsXAuPJgDlA/H0m7udwRQehnjDl+McnAEbop4E4Q5xJgwj7lfMj+Yu1sjvhrAPCyLfOpnfsKZ8WRh389ur50gD3nhyveQ+xnnKZP+7Rc1nV/ycb7X+HCMudC0TTnAG4c4/6si+rsuoo848Dg0n44uispczG5Pgx3OKunoX78xXsg52lOfj4wBfcIIRvCafD/DUaLvjC311cuh3O/3y1gdn8v2qIjgIP2l3ZTHuFMewnpdL5RP//qxKIzreQkq8Td5YNrLOMfYNceIuUHb4Mm49xpbyoN1eY2/zMMFTgwwxrjU958GpOt3qrSN8z+I+o13v/w78jgMGEe2XNvlLZZedTKPGB/mIF+bahrsuZ6bBr9XRa+J+IxZHxFkOD7i3PUR92quHe4n/Yx7OuNAG5iDbPuNc78ypuK4AYCpGEY7IQEJTCABPmiuiRaijRHfLTw1YhGkSWCpCFycgp4asUB8THRBxOKQBdaREQuzzdElUVtjIcd8LU4PiywWW72M8nESsLXRIdGVUXNxXSXZvmHRxSKOV4tKicgAACAASURBVITJy0KdfnCMemg3QYAt1blsltxwxrZF+0c8ib0wKo4GjieOCvXDs43RB6552LHAH3d9BU8Wplgps/pzpA2MEeOwlAtc+vUbEXxwkopjQ7+LMfdKH2qH798lX1E5iPOLY4yzyFsrR0QrKh2ULUEBvlr15ujdEX0rddLHQfMtp+832lasn2NZzpdyC8ta1u27o3Klz2dH9JU+Ms9WRVy7zH2unWOjy6OnR6TvZ4wDGtZ35mSznfSncGB/0FjBaGufRlBuP4aUT1+eHR0YMV/KeDGvr40ui06JVka9jPKbbe+Vrn6MOsq87zcnKJN7ZNOJLq/W92JWr6Pk51i/udFsJw5qYXVw82SLv2HEPKFPC7WyWmTd4UnKXCwV7ZGdQfc/2g/jXo5+v/kM84sjvpZ2XMT1wz2BecU1BNP5iB+WJKC1IepllF/m1KA29so7VcdmuvNTNZJ2RgISmEQCfNAQ0cYhujH6VMTTsRMiPsw1CSyWAM4Ec4sFIgtuFl8sbNkSEGChdVXUbxGfUw+wdTnCoorFLcb+XLXfa1PSsRjjO7Es1Db3Slgdo23lSQ5txRnnqdFR0epor+ik6OvRu6PmQj2HFm04IHBj4Ui91IljgrHIZKF4ebSlOjZsw9NkrnWu6wMi+njrsEw9zjOO+1bHF7LttUjuke0Bh8pCGkdsKdd6cykPUe4VUXFS647gX+U4fIcZfSuMGGPG46bovIjA0NqI32g4Otq/OsavllN23aGif8Whze5A43ooNsjxJQ3OJKJtSzUH6e+GiM8Eful/TcTXYOajw6O5aL9oIXpx1M9oD8zp96C+c65cn6Ws4sAyRy6KBtVTr597SHGe6sd77a/LwRdGOGkEiT4afS66LuI6YxwYf4IgzPmlMsqkjYwb84I+9jLONa+Lu6r0XLvNc/Uy4Pl/Vgcov18QpJ5nIX8g+npkxL1yUICnnpf58DtVHuY910izX80xLvk53u9cvY7F7Jd7DWUwr/lRyEH3f9IxRtwvRzE483lH2XwucE/gmvmT6IhoRcS9k2uDNL3u3fXroc24pZjptEETfDp7bK8kIAEJ7HwCfNjxYX9ZxOL1wxFPRtZHLFQ0CYxLgMUOrz6y0MZpYnF5fsTiiEU4DgfnmwvGHOprPP3HIcYIHvRaSNUzk/aAiMUXCzH2By0AcQyY+yxMSceCkToujPhe8LMi2v8XEQveS6OlNhZ/X4jgxQL70Gghog9w4/wnIq7dNsbCFMdmLmIsKGdb1DY/deB4/G5EezAW08VBrg613tB+nCzKXMp7DGx4Io9tinBGMJzDMseo88bqeHPDmGO9uLAmpc04cbBj7M+NcJaeGb0oWhkdFcEbxxJj/sG7n52WE3xHme99c21QB2U+tF+G6jjXE6KPtGkpjP7Td8YVMe83RgQD4MpXIVZFJ0cExfoZnyfkPzCijTCnX02by4HCvJzjngFfeOMQlbY08/J3aW+vc/2OwZbriusAdlzf50S0j7rrY0/aZvtyaGz7VlU+Za6OrutREqwI1DUDJ1y/tBFHkvPw6XXf5HpijDDSl3lYHeq5oZyPRdxnKPsZ0d/3TPnAg/M5RDCMPl0cMXbFSvuafSnnaet/qKXfEbvca5iPbOkb86nf9U/948yp0m7y3l6Jew73gfdGjAefKydFXBNw3hI1jaAfrJiDlNNrfJt5pvLvpbzophKQnZKABCSwhAT4sOHDkQ9xfiCQtwGuiGb2Q2gJ2c5yURem8zgofKbzCiQL12MjFn/Mt0uitobzyeKdRTKLJJxwXscfJObx+VUFc9k+LCoBhOrw/Zv12cORK47Hq7NPkIG6WNi+P7o0oi+HR9S7MtoRtiGFsuinLv4XBfp8QkTb4Fn6lN2hxkIUZ4NFMItM3iJgO4rhMCHGj3HjdwpYWI9jxaljHOjXUhgLa4JDcxHc+F5+cX4Yw+KAMgf7GY4MvxLO2x3PiWB0XPTJ6MsR/S9WHAsW8vzCNwt22PDd4eLE4mASLFpTMjW2pGcsXhDRduzyiOPM835jRJkEY2C3OSr9zO5YtiK5XhF9s9ruU5XCvZ+5RvnM+/MijuFIDRo3nB/ahf1+1O8amc85+lo36vtqdYB2Hdo4X/8T7gROeHsNJm2Mtv9WRL3MC3gzHxlPrvNizCcCANhS+SObUhb8qPuYquzmpoxt8zjXL+2kLTxV7jc3uKaYO8xBxoFroY29N4mYyzAob0cMy3dQEjB/V0fUwzgw54sVpmU+1U5t36Wtq5oHq7/rY9EnSevDNyQlc5h2ol4BCcbkqOg7EUHp46O2xr3iSxH3HPqE0f4SgLki+wTPOMZ1U9JsT1izfbPP5yIsy/2qkWQ2/lyqC242aNlLCXSLADfbIq/1yRq7Egj4UJqF48FrrTdGBgIma5y60ppr0lAWr8wfFlUsXB8fscDZGLVdoCbpdmeMRRJGXuYlDsMg4YjgrLIA5F6zPmIR2DQW3m+LWICxcP1o9O6oLETZspAjzdaI+9cpEX1iUbfUdm0KpH8s5HEW4PbEiHZcHOEMtDXY82YPC3zszAhnlj60sf2SiDErztgF2WdMx12kM+Y4CiyEGc9h7WDcyudFcwv7IyOcQeYHbTo7YqxK+6gLZrBkvI6Omp87tIW5AWvaBDPmFVvmxsHVeRzIulEO6RkfyseBwGB9aUS5OGywq9fJ/rERwQHm3Kcj8r8zot3U+dKoyQbnhfaTl3SXRcyTxRj9pP2rIso+MGry4e+V1XHGj4BHP6PvJUAET8aHPhajLJjxplmzf5S7IWJ+0w4cTBzIJrv1OYYOiO6MGKc2BrOSljb1cgY5/uKoOGq90rSpq5nmqhy4MqIv9L3wLOmoZ10Es6Yxn7nmmCPPiGDTZMfc5PcZKHdbhENaro
HsDrTbcpb5xlygnI9EtIM6muw5xnz+u6jMw7dm//KoXh/zgL/XRIxTvZw98jf3IMawl9HPMk7MzWY7euXpd2xjTlwfwRfutKfeFvJRB2+1cN0h2t7W6ONcxNgdHjXHhfnEPZQ64cv108v+cw7SDtrKnJ5Zaw7OJIFgEjFIo4pJMMn9miTGtmV6CaxK1/4m4ukaOjViMaBNFgE+1G6J3hCxgGWBzSJBk8CoBHhdHaeaBfUZ0dqIhT5PVtsai6pHRcUB42k8i8Q2xsIbYSw6D4rqi3oWo2+OWDgz76+OcACai2cWpBdG747oD2sAFpRHREv92U7dn4lwhHCATotYdHMcZ35UuyAZLolYgNLf90QsWFmX9DP6xGKYPlI/Y7Al+mC0EI1rlLE5onyextK/fkYanAd+PKvo9dl/S4ST8uWIJ3Y4K7A5J8KJbi6ycVa2RvSB9j8t4nOHObkqek6EA4TB6aJqf1O2zB3mGnMC52suIh9iTGCJ3RSdV+0vZMscxRFbH/Gq+ZER85f+nhDxy+Eroysi5hV1fDS6NGJcnhXRbtpHXaQ9M3p5hENBOp663hYtxsjPV05wPLg+cIRoM+2kXubA30SFMU8zBxnjcHFEn5hrtBdutL8wY+zmIsa3buS9LnprdfAJ2fIVBDjDDh0XsXbgOuYz6h+jtgEx5gVvOsCa65ag2uroIRFtI9BD/xifYuWeUzs09i5zDOduLoIBbSiMz8g+rPi7adxvaBdsaM8/RbApY4RTy1xhHlM+Ti9zaRQ7P4kZ+9ujFRFfC+BaoZ5DIngfGzEenGMcuCeeHXHNMRZ147qk3VxztHddRNtpK9cSY8g5ymgabWCMML5ic3p0dFS/b//i7PB/r0kS3uzh+qcNBHEpq7Dj/vKuaD7iWoDDVVFbOzcJt1SJ4XBiRD8ZR65T5hLjSj+vjS6LmkZamHPdcz9gDLUJJPCUtIkLlwk1init6a7oKxE3GU0Cs0iARdB3o/sqsZjlA0GTgAS6SeAlaXa5nnFQmos0Ftjfq9L8KNufR3we7tnoLosgnCLK+rfo9Nr5w7PPq9nl3Fzt3LBdnJDXRD+r8rMYxanCaCvt/3FE2d+PqGuQ8fnNAriUxz2MRWTTmSENZVI2i/u6kZZjhdv/yH4zP/fFr1Vp4EFa7p30p244Ly+ozpPuRY3z5U8cOYIurEMKx7/OPgtP2DMeiPL2jbhX4wCU9D/MPuk5vxhj0Y9DSbk4sIf0KYz6Cp9eW+bRTyL6/K/R6yPmWj87Kie+FVEvecn37WrL3/8efTqi33U7LH/gzHC+pKMcnvbTLtrA36dGdYMpDhnpSIO4DphjzB3awZzGCYRJsbns4JAzb0p99A8e5W/O44zV5wzlfCOiTSdHzeswh7Y7nvSFtuAIFWNMGRPmF22jHvbpV+k37WH+4KRglI9zQ330A6eqbnCEZ8n/g+xTJnVT1jujwqGZl+uT84wRbYEVeQu7wvJpOVa/j7BPeyifudrL1ubgx6MyrxkTAkn0gXzco1jjvyOiHtpd6uBa+UREu7mP9DPykob+zzUSEQyhTM4j5kfp11ez/7mIPnP/ODCq22n5g3aWtlMOY0Q7ycMc4Zol2DKu4fDfEJX517z2qIcxZU6+KoJJL2NO0wfKoYwyjmXM6eebI8a4eR/AOWcM6Fepn34zT5l/HGve67iPfDaifIIhdWP8CMLRZs6Tn3H/dkS7ECzfFXHdFuN+QoCP9P8Y0a5edlIOfjMq7WUcYEiZhRf9bd5bSllcu1+KaBvXVK9rt6Sd+m39ZjhpnX1oGjQ/xgDRJwYVNT/oJ62PtkcCEpCABCSwFARuTCFXRziVLK54KnRxdEfLwvm8fGTE0xTswujWar/NhqcpLPA3R2ui9RELvS3V/jOzZYFIu3gq1esJTQ7fb/SFRSFl4fizYGPh/g/RKO26v8A+O9fn+LUR9RSnm6dT9Gcc25ZMz41eF7EQpczXVsc2ZLsQsYD9DxGLaRxf0twTweot0Xuj26PFGOVdFV0XHRThrMD03kahnK8vxstp0lEG84c+fSW6KLomKk8NS9r6ljn36OiFEU8CcSRYlzFmzNFLondE1Fu3y/PH86KnRvMRTFjH0Q7GiLa/Mbo0qttt+ePt0UJEXuYK+TDm4qaIsWCM6U+xhew8Lnp2hJMPA9oJd85tiGgn+erGvKAftIu6mzxJSz0LEY4M7IpR9psixvlJ0aqItqKbo1sinLk3RFwnGOWTj3ZQb/N6hgf5nh5xzZU5TNqPRjhUjP+vRc288HlxxNg+PiJ4BYPSfupkPm6I6tcDbWIe0Ebq7mVX5uDLI/rFPODa3zdi7lwR4dh/IGL+cx3QPtrJfYH6YUT9W6N+BlvSLETNOfnWHKMMrkUcTPpF/5lHMCHAt3dEmmbed+cYc4571qFVWsaItlAnzj9pGJdx7UPJuDF6SsR9Fza0EaP/XC9cE3CCV7ON2xPGSPvkiHE8PoJzGWvyvT6in/SDvpZ5ld3tY8e1gcGefGWcGWPYwox0xci/EFEmAZW6kZYAAPdp2gR3ONOvheim6P3RWVF9PtE32FIfASj61MvOyUHmE9fswRH3FkS9jMuG6G0Rc7OXMc/2r84zvv2Y9srrsZ1IgInMIoEPzlH0maQnEsYNjQmiSWAWCRyZTnMjLVHdD2d/zSyCsM8SmBICL6ldz2dmnwVp056RA/WnI/PNBPkbR4cnSuXpzulVGhag/xzxJIUnJCdHveqokvfcsOD7SFUG5bMYXBd9IaJMysbBaVsui9n/Ef0wojzuaSc08lMe5/jcPyOqGwtajpX7IGVxrGmk4SkS6Wjj+maC/I1j9YIqTfOpWI/k2xf0r4l4ovajKl9pR33LeNEv2J8S0eelMha7OLGw/5uol6O/VHU1y4EzDBh/eK6N9mkm6vM36XBYjoqOiFb0Sdc8jDPAHJyP+AwkX3GqmmnrfzO2tI92Hha1beegMoed4xqgrbSTeotDMyzfoPP0F94I9r3mer/8zDucwPkI5iujttdpvzI5Dn/Kosz5iAAN47SzjPoPjOAM41GvL64h2s4YwWcpmKSYXzHKZH1GG+cj6mFOjmo45YUzc6vt+MOIubMqGqfeXu2kzHItFnZtrsVeZfU6xrgwz4+KuGbp+yCjXwSkud++okX6QWVNxbmlHIylBnJeCkSjGgsgLiSiVPeOmtn0EpCABCQggQkksCVtujhiUbc16vX5dmGVhgX2TdFlPfpxT459K7ok4skJ6TAWgNSxIeL4xuinnBjBFpL20xFPfVjU0g4WaLdEl0Z8LvOkqm25tON90f8d8fQGYzFP2ygTuzpiUU/Z26pj9Q3H6Cu2UD9R2+f8RdF+EU+mNvVIBzf4FG4LPdLUD/GkiqfgPC3kSTOLVBxwuDCGlEf/KJMHF9TP/lIaff98tD76k+j8iKfWO8OYnzBAoxpjW8Z3lLzMgRsrjZLv9iS+cpQMS5CWa2Cctg6qmvsCGseYi9eMk3FInnLdLPXcHlLt/aep/9q2iXuk4xrqdV/pkXTsQ8yF6yuNXUgy3hpxbxzVYDTuvOlXF
2Uu9fyu1zXquByazHx2kI/77c66D/bj4/EdQIAAAN/1+WJEFE2TwCwSIJLsGwCzOPL2WQISmEQCBEXKk0iCIgQEVkS77uDGzqV8XrvlLYpTo5359HUHd83iJSABCQwlwD2Pp/58ZYG3sgj0zrzt6A+emQcsAAlIQAISkIAEZp4AbxYs5knkuAAXkpEfVOMV98dGvBXCkzlNAhKQwCwQ4GHwERFvu30yGuetpKnjxGtomgQkMH0EeKWM18HKK5i84sgrWZoEJCABCcwWAb5ewNdDCAIcHvE2giYBCUhg2gnw9bBjopURPyrLV8Z6fX1u2jk8oH++AfAAJB6QwFQQ4Ptc/PprWejxxIeAgCYBCUhAArNFgO/Ts/hlzcePw/G5wBsJmgQkIIFpJrB3OsfD7nMjfgPFdXA12gYApnna27dZJkAA4O2zDMC+S0ACEpDA/QT4kTsCAfwIIT/4pklAAhKYdgL82N8HIu55vvpfG20DANM+9e2fBCQgAQlIQAKzToDXXhdmHYL9l4AEZooAX39FWoOAvwHglJCABCQgAQlIQAISkIAEJCABCcwAAQMAMzDIdlECEpCABCQgAQlIQAISkIAEJGAAwDkgAQlIQAISkIAEJCABCUhAAhKYAQIGAGZgkO2iBCQgAQlIQAISkIAEJCABCUjAAIBzQAISkIAEJCABCUhAAhKQgAQkMAMEDADMwCDbRQlIQAISkIAEJCABCUhAAhKQgAEA54AEJCABCUhAAhKQgAQkIAEJSGAGCBgAmIFBtosSkIAEJCABCUhAAhKQgAQkIIFdRSABCUwtgfr1fW96iTQJSEACEpCABCQgAQlIYEYJGACY0YG321NP4OD08G+jvaqefj7bs6KtU99zOygBCUhAAhKQgAQkIAEJ9CRgAKAnFg9KoPMEcPzXRftXPbkt2z063ys7IAEJSEACEpCABCQgAQmMTcDfABgbnRklIAEJSEACEpCABCQgAQlIQALdIWAAoDtjZUslIAEJSEACEpCABCQgAQlIQAJjEzAAMDY6M0pAAhKQgAQkIAEJSEACEpCABLpDwABAd8bKlkpAAhKQgAQkIAEJSEACEpCABMYmYABgbHRmlIAEJCABCUhAAhKQgAQkIAEJdIeA/wtAd8bKlkpgFAJ3JvHm6PYq07ZsfzpKAaaVgAQkIAEJSEACEpCABKaLgAGA6RpPeyOBQmBLdl4T7V4d2JrtLeKRgAQkIAEJSEACEpCABGaXgAGA2R17ez7dBG5O986b7i7aOwlIQAISkIAEJCABCUhgFAL+BsAotEwrAQlIQAISkIAEJCABCUhAAhLoKAEDAB0dOJstAQlIQAISkIAEJCABCUhAAhIYhYABgFFomVYCEpCABCQgAQlIQAISkIAEJNBRAgYAOjpwNlsCEpCABCQgAQlIQAISkIAEJDAKAQMAo9AyrQQkIAEJSEACEpCABCQgAQlIoKMEDAB0dOBstgQkIAEJSEACEpCABCQgAQlIYBQCBgBGoWVaCUhAAhKQgAQkIAEJSEACEpBARwkYAOjowNlsCUhAAhKQgAQkIAEJSEACEpDAKAQMAIxCy7QSkIAEJCABCUhAAhKQgAQkIIGOEti1o+222RKQwHACBPhKkO/e7CNNAhKQgAQkIAEJSEACEphRAr4BMKMDb7ennsDh6eFXou9Welu2q6a+13ZQAhKQgAQkIAEJSEACEuhLwDcA+qLxhAQ6TWC3tH6faN+qFw/J1uu900Nq4yUgAQlIQAISkIAEJLA4Ar4BsDh+5paABCQgAQlIQAISkIAEJCABCXSCgAGATgyTjZSABCQgAQlIQAISkIAEJCABCSyOgAGAxfEztwQkIAEJSEACEpCABCQgAQlIoBMEDAB0YphspAQkIAEJSEACEpCABCQgAQlIYHEEDAAsjp+5JSABCUhAAhKQgAQkIAEJSEACnSDgr4J3YphspARGJnB7clwR8T8BYNdHd1f7biQgAQlIQAISkIAEJCCBGSRgAGAGB90uzwSBzenlKyP+O0Dslujmat+NBCQgAQlIQAISkIAEJDCDBAwAzOCg2+WZIFDeAJiJztpJCUhAAhKQgAQkIAEJSGA4AX8DYDgjU0hAAhKQgAQkIAEJSEACEpCABDpPwABA54fQDkhAAhKQgAQkIAEJSEACEpCABIYTMAAwnJEpJCABCUhAAhKQgAQkIAEJSEACnSdgAKDzQ2gHJCABCUhAAhKQgAQkIAEJSEACwwkYABjOyBQSkIAEJCABCUhAAhKQgAQkIIHOEzAA0PkhtAMSkIAEloXAutR6cuTnyLLgt1IJSEACEpCABCQwOgEXbqMzM4cEJCCBWSawJp1/ffTO6OGRnyOzPBsmq+/818Z7R/tFe0xW0yayNQ9Jq1ZE+y8jr92r8WLM9oy8n0zkVLFREpDANBHgw1KTgAQkIAEJDCOwbxKcET0pmotwsDYNy+T5JSNwbEp6fLSYz+07k//T0UerVh2T7WOjcZ3lbybvu6MtVXlsjooeE+1VO9Zr994cvCe6I/pudFV0RXRrr8QDju2TcydFj4gOjEpffpr9rRFz9OPRlRH19bMzc+IPo3H5/nPynh/V249j/fIIJ5fjz4vqxjV1YvRHjeO9/oTX3dFt0Q3RZdE1vRIOOEbfjogeHfEGD/WX/sLmxuiz0bnRddGOMOpbHZ0QEUBcFcEHow3bIsbsIxFzgnEcZJR1SvTQ6BvROdH1gzJU5wg0UDfjg5H37yM4DzPm2VMj5h72lmhjtT9sc3AScB3PDUuY8/Sda/YH0bXR5VH9WitF0P8/jRYTPLk9+blOLqoKPSDbP4tgtBC9ObqpOsdmt+iVEXP85uht0Y6aM7Vq3ZWABCTQm8Azcvg70Rejg3on8agEJCABCbQkwFO506IvRf8e/Ty6rxJvAYzrMLWs3mQVgRdU/H+S7bj6fvL+9xrR/5Z9nItxy8NZxKGpG470v45Q5l1Jy7z6UcQcOzXCuRhmOI0viqjrx9HPojIvy5a5SvmsCV4bzUX97P05QTnjsnhj8q5oFI6j+MOqTJz2puG8vqc637Ze+kM7Gct/ipp1Nusof6/NzgerfJRRv44LLxgyFjB9TcRT+aU02kq5jAf1DBoz5gP9mxvSgMNz/vMR/D4TEdhoYzjLh1X5yEsAp60DzTUE/zJmH2tTYZXm6Gz/pZZ32LiX6+Pfkoc5RN3N4BpMSTesrEHnCcJx7RY7Mjuso8lDe5mrdSPQ9rXq/DezJb0mAQl0hIALt44MlM2UwIgEWBzzKmxZ0JQnRzxh0STQhgBO2ProhRGLbOZU2wVym/JNMxoB2POZ3e9zuz42/Z5ikreerpRZHO5++fq1tFdbKJPyEE/3ebLYr1ycCF5DJw/7OKmvqo797wH5eOr4loj5ST7Kpy7eIFiIeHKKs3lIRFr2nxOtjF4R9XpyXm93v/Yma1+rc60nglHh0StzOc+9GVY88e1lpCMYR3+5FtnnzQf69sSo/nQ2f/6K8ZYBT7p5KEI76d8t0eXR1urY6mwPjSgXTvBiPF4cwXUxRp2MxesiHHT6Qht4
K+LqaHP1N32hTp6s4+SeHM1H9G9j1M8KQ7b9xqFX3jLmnGsTdCId85Wn7bSRPlDnsRHMtkTDjDpLe5mnMOj3uUyb4MCWfNTN/fj/if42Ys5jnCvlVod+ZVNn0m9uN9kVNtTNuV5W+tGWXa8yPCYBCUhgSQj4BsCSYLSQLhPYddddj0z7ieiXJzsfzv6aLvfJtu8UAiz6cC5w+HlayJPG5lPV+t/vzPl+i8Od0uAZqqQE9QjsNcVTeJ6qlrFpnq//jQNZ7K+zw5NW8uGccY8YlLd5DoekOf7PyrHvVWXyWxGDniIz33CkTou+HJWn0jzR7fckl9fWeVrL02PS8wYDbzLQlqbRXhzZb0WlbAIHOGtNY77ztBMWT4uafR32N1zrjhblHxjx5JYyearetNU5wJsHnOfp7hOaCRp/U/4BEa9jl7ZyjcK5WTdZOUaQ4OsRdcDsq9GpUX0e5M/tRh95q4LPDnihz0UEBsY12kDggafItIEyeWLM3GMsm0YAgvlAm8sbAjDkntTLOM6TasqmrUf0StTjGO0ib7lmyNuLYTMrPL9d5XtXtrzJQBkErtrYMUnEXCcP8xg2g4xxOjr6ZFR4MJ9hVIw0/ebn2pwrbeSthX7pOM49pth8dko72TJX60ad34joBzxYc2gSkIAElo2AAYBlQ2/Fk0LAAMCkjERn2sHClwUdjiSOIAvFsjAetH1n0jUdwM50eooaikOLg1nGqm3XcMJKAAAHBmd8sTZKAKDUxfzjdWy+AkAfcPheEjUdMp404uwWpxpHaJhzyvzEscbBpmwCBjhPzXlbDwCckvNLYUsdAChtou1/E5WgBk79XI8G4/x9OqLfBAx4pb7pyDWzUTbBlxKQwenkawqDAjnNMup/k+8dEW2grM9GbZz0uaTDQS5OLwGBXgGDnRkAgA19waFG8GUOxEcXRAAAIABJREFU0rdvRwQvhtmoAYBS3lx2GD/qYtzfFvUKejXrX5UDJQBAYK6tzSehAYC2tEwngY4RaH64dqz5NlcCEpCABBZJAMefJ79nRh+JeJraa6G9yGrMLoG+BO7NmesinHAMx+Y/V9vq0PYNjiNPQzl/e/RX0ZXbz/Q3Xq++MHpfdHdEkOMR0Vz/LBN/hj69Nbqpaik8eMW+blzXj41gBt8LoldGN/5Kqgf+QdmXRy+MFiKcXp7uHheNumYkL+0ioEIbro4ING2MhtlCEvxlxLzACFw8Nxq1DVX2Jdnw9gVBUthuirZF50YwI9BxQrSjbEsK5m2ROyMYrIjmdlRllisBCUw3geW8kU43WXsngQki8Ou//uu7PPjBD56gFtmUCSDA01SeDp0c8Srrq6NhTwebzWaBz2ujsyoYaktDgO8z41AVw8lCxZhrj4lwfLBzossiHMthRrCAJ8+XRDiU5Km/7jws/ySe5/vjOIUYbJpPg3FUeZIPw5sjnuJfT+IWhkN7VcTTbupZGT08wskdxWjTX0Swvi0iCMEYtDX6R8CAwA3XGm9yLGdw8qjUX+YfbycwZ/l6XWnfk7K/o+YVcxaGt0QY9dSvj+qwGwlIQALDCfChoUlAAlNO4N5726yRpxyC3SsEytMjXp1+cnRMNO5nwVzy8p1YHIZZNBbk589ix3dAn3FminPHDYsnnThWxTjH02Res2a+8bYKadoabwq8rMqDY1kvu20Zk5SuzgsezMViXOMHROX75Ruzf200ygfBrUl/abS5Vhb860Ga/DnQ+F750VWKrdl+PBqlDWT9aPSKaE3E2xs44e/lxE42+vLHEW2AzYaI+Xd5RFCJ+ylfCUCboqU27tF7RSXQw/wdZf4vdXssTwIS6DCBcRd9He6yTW8S2HPPPfe9++67V+f4rzzNuueeWV3TNwl14+9873+X3Xf/xcOHn/zkJyzU7n8Scd999+2bY4cnzf70xrGd7DFlLIvV9+vj9qAHPej2n/3sZyw8R10EMi9Oj54fNZ8ajgrmiGRAs2o4VQYAFj/6TPgDo0dWRfFkle9U151anFmcMIwn2sz9UT6keAvgqip/1zdcw8dGcxEONQ4pPIpx/rcigiYY/13bKI57lW17nisiggkrI94YamuM6cERY0YbKeuatplr6XgD4cKIAAD9+qNoOQIAh1dtoF+XRFujEszg7QoCAASn/jy6rHYuu4u2ErTlfx8gCMC8p/5xxnTRjbEACUig+wR+ucrsfl/swZgE7rrrrnVxEF8eh2L/bHcpGrM4sy0TAZzDO+5g3bzLLhlLgjl1547Fy0FJM8qCeZl6YrX1YWo4/ffDyXXKIvN5EU/oRrUbk2FDRKCIhf24xgIUJ3gWjcV/eQV7GvrPPOA+UXe6B/WLwBNzb1B6nlLjAPa67+DUlPsUjuXjovkIrldGm6r9bLYbwUscLAxn9+5qf0dsfieFjhLYwgFfiEYNxtXbDg/u2Txh7mWs13CA4cl1+/KIPAQ2cEi5pouRZnuwNwYnrtNffDhUB1tu6NcNVVocz/8YMWY45cOsBHVIRxsWqu2wfL3O8+OQGGUSjNjZRp/5CgTXCH3/VFSf9+fmb95SYPzmq3QL2Q4zyoXroDFnzq+IcP5PqQpcyPZzUfk6QHXYjQQkIIF2BAwAtOM07am2Lyrymvi+OP/F4kTiSG7/s2x/7ddYb/zy73r6+zO6s9MJlPEpFfN3CeRUr/+z0Ni7jF893WIa6/gvht7wvM2vbvB3jTkLx3Hu4TgpH4hwGk6MHh2tjcrTwuEN+2UKnLSXRr0cvFHK6WraNo5QV/p2dBrKPGg7ljj/r4mYA/0MR5Uf6uvlGPNhQoBgv4gnxasj6sa559fOefJcN5xaPqsw3gBo2856GW33T0vC49smTroN0RujuhM+QvbtSekf1+JD+2TkWsfBhNNhEezgemHEL8ITOCmG08j9ASNAgOrnq1NDNzjuxclkvCiTetvMe9L/ZlUD6RfjrG6pyoEB9ynKHqc/VTEjb1YlB3MZ/tdGV0b1Ob01f5f7KfOZufOGaJgRUOBrWP3Y8LlNf3k75qCIfm+L3hddFGkSkIAExiIwzuJxrIrM1A0C/Fhc/S2ApoP385/zv88YAJi00RwUAKi3tYxfOdbMN2q/mvNj1PymXzYCLJ5vit4abYhYsPKECQeQRW5b46niQrQjnbG2bTHd4gjgaKC2xvoBR3OQ8Vo0GmY4iAQUroo+Fp0fNZ/wU98vItC/cL52pAOIY4ba2kISluBE2zzNdDjXXIdtAg849JdHG6N3Rdc3CsNxRBhsx70+yVd3dBmDUm5V/MBNYUI5dw1MOfhk/e2F0obm/BhcwuLOHpHsa6oicLxxwuvGXORrACdE3D8fFb03uvVXUj3wj7kcOuOBhx9whPIJel0TfTI6p/r7AQk9IAEJSKANAQMAbSjNSBqcOb5vvM8+++yy33777bL33nvvku+Nb+99802A8rcO4GRMjqYjX38DoD5GvdItpgeO/2LoDc/b5HvppZfW3wAYXsDwFCzMeaKF87UhYuF6dHRwtFiHJkVoHSLAE3cc8LaO1feStjyZ7dfNrVWa+hNjniDjTOHwFsfmvOx/PqI
NPEXv5bByrDj9zM0SDMjukhtOXtOpHlTJV3Oy31PcQfnq53C0FyIcvWKs0QjKzEXF8b4s+wRJrowIAhAMaBqcCqvmuVH/rq8TRy23jCNjtZjxWkwbRu1vMz1vZvxhtH90W8Sr92ybtjEHmLvM7QMiggb/P3vvA3VbVdf9HlBPXONFQi4hnU5PjBMvERERIREdj4RIiIiGhKSGSWp5zdtteB2NbqPRsN7e8vZ67a+akZoRKZERIdGJjkRIRISERKdzj09HPJ6LvIjEa4aI9/t5XPO8i3X2n7X23s/z7D+f3xjfZ6+91pxzzfmZc649f7+19n4Y14OMchhn9SAL/bwp2lxlJPixPeJrB/Q5QYC2c7QqwhcJSEACTyZgAMAR8SQCPGJ80kknbbjooos2POMZz9jwtKc9bSChpoMyMLEHV53AsP5oBgCGpR9W4WZ5w9J7vBsBAnL5ob8NfHVj3759Gz72sY9tePjhXmvPbuX2SM2CEsfivogFLj/Gdl60JfJzogewOdx1c9rEY+zD7lqWpuMMDnNEbk0a7owW5xgnEOcdB+ml0daqMAIEBKIYf/2MgV8CCQQPxnEo+52j7Odfu31gWKLacRzdYSyGFQf3P4hw9oqVAAA/fHdpVJ7QIC39VQ+s1LKt1KU4lQRcRg3mkY872sUos+6s1g4dsMn4KP1OO555QIr2O0q7KROHuF+7u46Jevp+wQ2CoQjHnPZ8R0RQoGkcL8GYI7PN1zluiPrVlfwEDN4e7eZNZZSD808w9iXVPsbXnogAHduaBCQggbEIuLAbC9/8ZcbZ2Lhx44ZnPvOZG571rGetPBEwyMZ1IAeV7bHuBIb1R9NhH5Z+WA2a5Q1L7/FuBOoBAL6eswaGk3VTxF0m7siyCD034u5X18X1GlTXU0yQAE4jzlWvO8qjnmZfMnJXn9e67cgbxtjPRGdHl0U4Tb8U4eT0sr3ZSf2wY6NyR7xX2n77CBzg6A1rI07usDT9zjHqfvhzN5hAXNNuyY5PRfzexukR8xHn/B1RL4eQAEFhjgMPW3gNckZz+AA7NHvK9/hh8pmobaADzuUHBKkrd7VZUPSq7wEnbuw4oXpP3uXGMc5TyuQaNXjR8uTMBEewehn1FJR1WnRctRPH//KoXxtKsIRyT41OjO6s8vZ64Xp7d8RcaBrzhjHI+S6ICILQvuubCX0vAQlIoCuBLhfKrmWbfkYJPOXgp6z8O7mv/uqv3pB/DziwFeM6kAML92BnAs0fjWsW0PwRwGHpm/mb75vlNY/7fjwCzC/6qHylY7zSWudmMYzzcF2EM/aX0YsjHDUWwJoExiXAGOPpgF+McGZx8HjiBAf/LRF3O5t2X3bg2GLcISXf/RFltTGcsosjfs19V8T/pMf56ufMtSlzrdI8kBNdET0r+vGI9vMfQJinV/eoBI4j/0aRV5zS/xzhQMKri5HnpCoDARj6pS1vuOL8soggAECdlyLYd7XvrTIQwCj/EaCUwb4SGOI8qI3hTJfrGW0qZdTzLuUNd/wJHNEOeA4aL6UMmG+KGNODAgA53NcIBr2tVg6BiDdGPIVwe99cHpCABCTQgoABgBaQFi3Jl5740orjz7+UG/YEwKKxsb0SWCACLHSXIxb+LGL/PHpZdEZU7pxlU5PASAQYX7dFvxLh6OA0nR9x1/hXo2b0mXFIEOCUiPF3ToQDz53pNsZTAzzRwjlwqPnePvlnxbhbzOPiJ0dnRUvRm6J7K+Vlv8EWVjiR3IneGv1R1CUAwBMDJ0Twxiir153q6vABLzjVnI872WdGOMQEEfsFADiOo70c1Z1xAhC0AaOvt1fb5YW0D1VvcOgpo40RANhSJSSIwJhoGqwRaW+O+DpLebKimZb3rKmpK0Es6sFXN2hXF+6Ug8EPVm+NeAKBunLt/ZGIQMygeuSwJgEJSKA/AS5qmgQkMCcEuCM/SM1m8lj5OGqW5/u5JMDimMX/VRF3HX86wnHCydAkMA4BHLoboiurQrjjzG8DbOtRKOOQp1GKo/aD2eaucpt1DM4szhN3UUlfHKhZG8M4kj8XEQygHTjHzEna1zTmKAEW2ogD+aIIZ7StHZuEMMaRfSTCkcch7WLk41864szSt9Th+D4FcLecAMfvRhdHh0bczSfIQR1oxy0RAY+6waI42Edneylq8xQAzjoOOsY4/Odqu7wQkOLu/1LEuf8i+lC0fYBuzLFrot0R5cOd4MuoxnnhDhfGP0wIfL0kajPuRz2v+SQggTkn4AVkzju4bfN8lL8tKdNJYGEJsEjmDuAV0Sui/xLtXVgaNnxSBHDo3xvdFbEmOTH6gQjnvmk4WDi2OEPcEf3ZCEdtkFEmjj9Pr+CE4lQRdOAO+SwaTj1PTGA4ujiEF1Xv6y8P5g2/HA8vAgSXRpdE3CUfZsckwRuis6qEOKF8ZaL5VMawcugnHOabIxxigjA8xt4MRFC/b4lOjy6I+C0IngzhKyIEBuhDHH32NevA/n+KCDbA43kRgZFhdkoSFOecMuBaNwIV3M2nbgQ+4Mg1cJAR6NgXMb4wOPL1BZ5YGdU45/URwQeMMr8/YkxrEpCABEYiwAV5Wo3oMxe6rkGKb0oeLtjT3LZpZW69JCABCQwjwEKbxfCeiAU3DpUmgVEJMH4ILP1WhGOLs4RTy3e93xXVxxePevOfCggSsEbAWSQ9PybImGwa4xMn780RziXriR0RjjFO3ywaPODCr8zjBLJOem2EA8ud52I4ozje/GcB0hwdcTf96yIYLkekqRvrJtjyY4PnRrDF+eXRd/qoq1E+1wkeY6e/COoQhDg8oq8JLJCGNuHc/2306ohAwaXVMe56k+YXItI3rYyf23Pg7Ih+pv6ck329rk9nZT/no2wCCndWysuKMU7gQJAAoxw4NHlVh5/0wvXxT6PLI8YfwQgCCfTFKFYY/nYy099L1StBsp0Rc0KTgAQk0InANDvJr0pLXhnhzHcxPrC4qD8QTXP7urRpVdN+6UtfWtXyLVwCEphLArPqQM1lZ0ygURenDByVxzqUxd3JP4ve1yFPr6SPZuf26OqIz32cVb4KgFN/SyMDjhTOIHeJj4xwVE+u0v1VXvdGOHBLET/4hzNJOtYSJdCAwzfImcNR5omBLgYL6kTZq20P5gR8FYe78qx5aP/rIwIddYcXrldE/yl6XXRUhGMKs5uij0b7ItZKSxGPxG+NSIfzivNOsIB+6TIukny/kY8+o77c0d8UXRgx1nCs/zriPPQHazfqTBvYLgZTxli/OtCvfxidEB0TlTFxR7b/PoIX7eFHFDnviRFjjHPi2FMvAgHFKOPZEYEK+pXAxP3/8/DALepOmbdG26ItEU8BwGBUo0zaQpCMsQ+b8yOCZFeOWugE8zFeYNgmGAFzAjT3TvD8FiUBCXQkMM0O8mFpCxforgGAgoAFgNaRAL82XlfzqwHNf/vWfN/xdCafcQL2/+p2IPOvqdU9o6UvMIFj0/alju3HWcPZmYQtpxDuNHMH97gIx51Hne+LcOCK4QTi9BCA4i7y5koX5fWCqDjArG9YPyCcjtsifpxte9
TPkcyhFeOuLU5iF+PO7+90yTBG2tKed6SM/yNivXRe9JHo2ka5OGXcDf/XiMAGbNFS9PKIsrA6r+JwwuvGqO4cV8k7vTBOrom4McPTGvQtjjF9V+8z1m2lz7K5Ujf20Rd/FPGVj16ONPX7QISTTxspdynCkYdLaWMpn7ayjwDTj1avedlvx2eLcUh6ggs4q8PGTC37SjsJjG2L6BuCDjDfWU/UcZvxfn30PRHOP/O1BMmo43oaQajTWlaAscV41CQggXUkwMVtWo0PiW+MeFyti4gs7o34wCkLgWlto/WSgAQkIAEJTJJAcXa6lsl6AMeoi8jTax0xSh3Iwx1hHnXG2cIRxKk/p8c5cPhwdL8r+vloT5WeO6PctUVsUzcCCNwZ55HpG6I2jtwoLAbdUKFtw5gMO54inmSscfhxuOL84VDzC/FHNxPmPWnfFz0/4k48TKgvzmmdV96uPHGBU0za66JxnX/KxLiTTvAFp5V63hrR5nqfUR+ceBxu6vmtEfWmP7ZGPFr/wQgHvWkEYN4RETR6V7Qvoo04p5wDsc2+3RHl83sBjLk6e9IQADohwu6I4NXFqAscqQN1p6wz+xTQpd+pB0GmUi5zg6cpqPOkrc2YrZ8Trm016bpangQk0JHAQR3Tz0Ly16WSXNjvj/iQWe/I6CwwuySV5NHFzdzRfdGLXrThVa961Yajjz565Rfl6+Yd31noTus4LwSYf4899tjKf2r49Kc/veEVr3jFhs997nOleTdng2vcOHeV5gWV7ZDApiDAMTwywhF5MNoVLUfeDAiEhm3O+8ILPjiVON5wWwtjccGj49wZJ2jBe86Nk0td6n12ft5z13gp2hMRnFiOBtkhObglWooIclAeT0NwvaSMLo53kmsSkIAE5ocAH5KaBCQgAQlIQAISmGUCBP2R1o4ATjBaL8MBx9FHw4wnEW6KeCIEZ355WIYc58kFbgB5E6gFLJNIQAKLRcAAwGL1t62VgAQkIAEJSEACs0aArxC8b9YqbX0lIAEJTCMBHrnSJCABCUhAAhKQgAQkIAEJSEACEphzAgYA5ryDbZ4EJCABCUhAAhKQgAQkIAEJSAACBgAcBxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQgAQlIQAISkIAEJCABCUjAAIBjQAISkIAEJCABCUhAAhKQgAQksAAEDAAsQCfbRAlIQAISkIAEJCABCUhAAhKQgAEAx4AEJCABCUhAAhKQgAQkIAEJSGABCBgAWIBOtokSkIAEJCABCUhAAhKQgAQkIAEDAI4BCUhAAhKQgAQkIAEJSEACEpDAAhAwALAAnWwTJSABCUhAAhKQgAQkIAEJSEACBgAcAxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQgAQlIQAISkIAEJCABCUjAAIBjQAISkIAEJCABCUhAAhKQgAQksAAEDAAsQCfbRAlIQAISkIAEJCABCUhAAhKQgAEAx4AEJCABCUhAAhKQgAQkIAEJSGABCBgAWIBOtokSkIAEJCABCUhAAhKQgAQkIAEDAI4BCUhAAhKQgAQkIAEJSEACEpDAAhAwALAAnWwTJSABCUhAAhKQgAQkIAEJSEACBgAcAxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQgAQlIQAISkIAEJCABCUjAAIBjQAISkIAEJCABCUhAAhKQgAQksAAEDAAsQCfbRAlIQAISkIAEJCABCUhAAhKQgAEAx4AEJCABCUhAAhKQgAQkIAEJSGABCBgAWIBOtokSkIAEJCABCUhAAhKQgAQkIAEDAI4BCUhAAhKQgAQkIAEJSEACEpDAAhAwALAAnWwTJSABCUhAAhKQgAQkIAEJSEACBgAcAxKQgAQkIAEJSEACEpCABCQggQUgYABgATrZJkpAAhKQgAQkIAEJSEACEpCABAwAOAYkIAEJSEACEpCABCQgAQlIQAILQMAAwAJ0sk2UgAQkIAEJSEACEpCABCQgAQkYAHAMSEACEpCABCQgAQlIQAISkIAEFoCAAYAF6GSbKAEJSEACEpCABCQgAQlIQAISMADgGJCABCQgAQlIQAISkIAEJCABCSwAAQMAC9DJNlECEpCABCQgAQlIQAISkIAEJGAAwDEgAQlIQAISkIAEJCABCUhAAhJYAAIGABagk22iBCQwewQOOuigDb00ey2xxhKQgAQkIAEJSEAC00LAAMC09IT1kIAEJCABCUhAAhKQgAQkIAEJrCIBAwCrCNeiJSABCUhAAhKQgAQkIAEJSEAC00LAAMC09IT1kIAEJCABCUhAAhKQgAQkIAEJrCIBAwCrCNeiJSABCUhAAhKQgAQkIAEJSEAC00LAAMC09IT1kIAEJDB7BPwMmb0+s8YSkIAEJCABCSwwgacucNttugQkIAEJjEZgY7KdGD09ujV6YrRizDUmAfgfEx0dHRrxmf549HC0L7q/ep+XvnZYjhwRjRrM+ULyPhTx2sY416aI10Mi6vxYVOpMvXk/zGgz7cfI8/lhGarjjF3OTV5YPRA16745+6gX4/rB6NGojVH2URGvtIF6cY5ilEnbB7HmnOShPZy3DYv9J+ixQV1gxTihr2FO+ZRN25ajtuySdCSjvTCHa+l3Cip1YJw+ErW5jtAeGGL0G4zb5CP94RHnxzgf7W9jzC3ylTGxN9tt+4VxxpgY1OelDrSDcuFC/XoZdaAd9OU4xhig/aXv623k/MzHehvLOOL82O7qdbVfaCufNbC5L+Jag
9F+jpX6sI9+ac7llcR9jLz0K20v1utawnij/aMadW6Ob+Yjc7Gf0V5Ee8jbtl2MM9pzZFVwl3FOXtpamDI++o3Dqvj9L7SFc8KJenedI+TlvIw9eNWvm3nb06jvUnWEcxbOPRM3dm7Je3R/xLhqc7425c5MmvrEmZlKW1EJSEACElgXAnzgHh9ti14YsQi8LeLDV1s7AjgVp0RnRt8abY5YDLP4YtHOwm05+ofo1uiuqN8C5/Qcoy/JP4r9SzJdFe0ckJlxc1x0avTtEQsvnFIWqqxDWNziiC9H/xgxpu6JBjmml+T4t0Xkf1t0Z9TGWGi+PIIbC8a3R4zjuv1k3uAYwPJvovdEbcY4Zb82WopYWP5iVF9A47S8KeK1n9FPxQn8TLapG21bjvr1Ya+yWJCfEJ0RwfzYiPqxn3Jw8KgjvG+PbolYfE/S6HfOyzilDowB+r0EbmCDo8AC/O8ixuq+aBBrnOmfi7CPR78eta03Y/2lEfPkL6P3RG1sWxK9IGK8wu7Xorur7WH5uV7+aAT3YUbZzAW4fDKCC33THEPnZ9/3DitsyHHm2+9HZd5Qzx+MGCOM+Q9F9EUx5sNPRBxnfL66dmy1NmG2rTov9fyViDmLcf17UUR9ir0zG7dFbecJTvjFEdeRYr2uJT+Sg4zjUe2PkvHGqH49uyzvv3lAgcwBOJPnv0d7IsYcY2LQdZHr4ckRdS7Xryuy3ca4Lr8x4vrE+f8ien+bjEkDn5dFSxH83xsxn6nDMCMv44m+/Ovo6qj086C85PvZKgFM/iq6alCG2jH6/qci5gHjmmvhQhkDRZPAhoMPPnjDl7/85ZX/O37wQQd/5TX7eK9JQAISCIGlaFvEQphXFoTvi7S1JcAi6SURi/VTo+JM9aoF6XDsWOizoO+1cGThf1GEYzaKseC+KdrZJzOLSRwWFus4gpzn4D5p2Y1jitPzwei6qO6E1LM9N2/Oi1jHsMAujkw9Ta9tAh04T+dELPpg0wwAwIMFIotgHI17Iuo0zCibOpGHxTrOxCO1TPQVZePEtjEW0iz874j+OLo26tWHzbIon3r8QHRaxFztZ4wR6grrP4ju7Zew436c7FKHrdmGZz9jfOyKGEf0B+3FEe5ltOXl1YHteX139GivhD32nZB9l0Q4luR5T480zV2cjwDZZRFtwugTxgxjdZhtSoKLI+ZBF6OfGXd/EuGa0d0UAAAgAElEQVTAlXnAGHp2VBh0KbOelvrjbJV5s5TtCyNemZ+wrRv1Z6xsjqjbagcAqAMOKQE1zsm4qF8LOFbqk80Vw2lkLD9Sdgx5PSnHfyg6sZaO607zWnJu9jGPRrV/TsYdUX3uPj/vmRdt7IkkeiCibR+OcJAJnLG/aYUb44M5RBrGTxu7IInIV8bqcdm+JqrXu185zO9y7SPNkRHzZDnqVc96OeQtY+/xbF8f0ZeDjHbS/2UeEGignBujYXkp966I8USdCYL+l4hzL4wZAFiYrh7cUBx9AgCaBCQggQYBPsi3RTj+Z0V8yPrZsT7DBMeFRRp3qFkAs2BhsYrT9mDEIog0m6KTo2MjHF0W0NhV0aDFGA4xizbKaWucu58jhEN8eVTuoDFuWLjeHu2KPhvRhmdE1PHUiEXn2RF1x9m5MqJt62EsMqnP66PdE64HDih385rOK4zoQ+YdwRI4IPrzqOhdUT/nOIdW5ucrIxw08tHfOyPGCW34XLQxelZ0UoRjQ9m0E/1SRNpxDG6XRXDjHNSBfr8tot+pA/a10fHRKRHnXqrevzWvN0WD2pnDa2LUH9EvOJaMSZwVHNJ+475fxeCPE9fPoaJfmDMwYQ7TN/Qh5/6vEfMSFYcymwcYY2YpYgwxbxhjvRwb+mO95tUBle6xAw5c686IbqjUqx31rOfmDeO3TQAAPjj+8G1rnJ/r3UNtM1Tp9uS1X93ZTx81+4I5RB3hQJ8WMRa/MeJpCK7XzK1J2KEp5PsixjfBBq41J0RnRjeOcAKu4ZdEvxo1r3EjFHdAFuYKTxzQfvqD6yXXkNOj6w9IfeAOxghPjNA+gkyMMT6XFsa4qEyzvS6V+6aOFeSDjAnD5Dmiep3UBOlYldlJ/pSnPGXDE0+IaXZ6zJpKYNUJsCBg8cWd223RlogPXW39CLBg5a4ufcEC5v3Rn0Y4FjgjLCbpIxZDOBE8JfCSavtVeb0rYgHbz27NgfdGXRa41IMFbtOox6URTuDmCMflAxF3NKkDDggLQz54WHSWBSfj7cKINrIwo+zron4L6BxaVaNuLGYvit4VTeqDksU7i/jlRu1Zu8DusAgGL45OjVjc8vUBFudw7FUP1jzUE+abqrQshgvzfdn+fMTaj7RLEXOcR9RxhODO+Xk0dlc0qp1flbGUV863PeLpgrsj6lAcAtZqjI3TovJEy9Zs4/gwBu+I1qvfc+oVToU93Bm3zEHmFnVmbH4hamu0nz6HQS/jfFx3YYJzQ1/ST2zfHO2ImOcfim6LehmB2tdEMKS+PxfRB02j3r3mbTPderyHA87nKyIc4z+LmC/9jPbRXsYwDjJ8e82Pen7mx7dV+UjLuB9mMPvtiHHZxah7v3HCXCCYtKNHgVwHGA/U9XnRWRGBgFdGXHcZS4yHSRjjmesN7K+KXhfBlHnJ/B3GM0meZOQl8HtrdEs06XnMvGQu0vc471y7jokIYhCwaHM+6nZTdHH05ojP1jb5kmz2jY6eZisLzy51pE2ID20+XJjUXQdul/OZVgISkMCaEFijp3RYdJwc8V1ZFhw4HixCtPUnwCKNvuFz7dqI7yLvipqfcTgrOyMWniwYcfJYLJ0T4RT0MxbOd0a8jmuckwU8i1fq9+vR70TUq1lfFrNoObovYrF8WcTYw6G5K1qO1suOyolZCMPm9glVgoU/DuGg/mAdwyL1p6OzI1iyjTO4N6ob6x4WxQRNSMfxK6LfjXD06gtbtjmOqANjCGcCB4oxQnqCAI9FXY3x9jPRUkQ/Xh1xR59+bS6uGafFseY4bTszYpzzlAuBCRzA9TI4fnvEWhLmfA3jyIjx8P0RjgdtbGtljDMvBxkOJn2CQ8M84pWvIeyI6JPlSnk5wJgzpd8IojBmJ+UkHnCyVdoBbwJfjEcY3xg1rxn1UzOGCRgQNMMB3BEN6xc4cS3lWsfcoU+HGXVgnN42LGGH48wJromDytyY44w/xs2lEXW9PPqbaHvUnFfZ1dnwt46oyiIgcXrEPGQ+bomoY1dbSgaClvdEk57HXI/pt89HH4w4F3PllIixwJgYZowRPpPOj7jubYvguRAGvGm2v0jlGPAHd6jk8UnLpObDlYvtoItGh2IXI+nK7wAcnN8ByPf/MX8DYDH63VZOHwHm3uOPP77y1Rye0Pmqr/qqDY89VtZ1q1JfJj0f9BdHOF1cS1mIadNBgEXgN0Q4IAwEPh93R/0+41gUsvD6vYiFEflwZvjcn8SCMcX0NQJGBJAIOnA+nEAetxxUXwqjXrsi7haTd2vEApSxuByttbFAJPjAYph1BU9RsBBeK4eKxS2OAXdxN0cwYXF7SfTforodnTc4pbAi
37URzFlDDTLWStdFBBtYDNN350Xcdb1pUMY+xy7LflgxLm+P3hLRp4OM+uLgMMZpB208NzoruiZa7fGaU/Q02oG4NjIOboi+N6J+26JjI5zsfnMwh0a2e5PzDyPmLn1C31OP1TjXyJVchYy0Ea4XRcwzfqfggSHnIcgBI/qFscsPwzF3+xljnbRL0S3RIRFO9bQa1/v7ordFzI8Lq1d+v4A5xhgcx7i2cI2DC+OO+crvqpwW8bnB+X45amuUQT7EHP7hiOvVpOYx5Z4fMRe4vsGA30bgs4KxwznbBACSbOX6ytzms+aNEde8eZ9jtHvlg3ma7X2pHBOzi70yiRkAeyMuHgvRkV0ANdPiXBTD8cfp4OsAT3va0zb8x3/8RzO57yUggTUgUJ+LX/rSlzYg5uYqGYufiyOcNhZG3AlgIaZNDwH6AweJV8SicNjnG4tgFrg7Ipysj0drEQDAaTo1YlH+QIQjuhwNq2+SrKRhQbYj4m4O7SxtbpOfMiZlrCF+K9ocHRPhlP599O5JnaBlOfAgkMOdde5yEoj4f6LCAz6se1gUs31PxF0x1kFtjAsLzu2HokuiTRHBhB1RF+aHJz1OCWOM8fb2CGeijVGHHdH1EaxLWdSLmzlrbYxdAmZwfTT6h2g5wiHFKeeayV1TWNPWSRvccfow+pT6sB5ejXNVp5mKF5zQcyLGYLkODBuD9M+tEfN0KeK6cVPUzyj7OyKupziKfN6d2S/xFO3fk7pwHaB9x0Vcj5grXKeGMRrUjLNyECaMsxsjPjcIvBG8Y9wxzt8RtZ2Hy0nLPL4swlnHsaZc+nMSRruZf1wz6Gc+Y/icgwPn++7oymp/XgYabeXJHvqfeQ3bOwbmmJOD0x4AeHAEzkTCGBQsGnjVWhDAucA2bty44d/+7d82fOpTn9rw1Kc+1QBAC3YmkcBqEChPABCM4+7/Zz7zmQ1f/OIXJ30qFlvnRXxX76SID1UWAdr0EeAz7bMRCxYcgedGLHq5A9LPWBTujOhfPg9ZKFPOahsLqaXqJCzQcGS6fB5TTx5D/UjE4o6F7zgL3KoqnV+oM3eXfiNiMcwimSAZdxzRWhn1+ED0ExEBgC3RCRHOJ8Yi/eTo6Ii07GcR24UZi/v3RhdHXBcoj/bCvq2dmYTHVIkZdzjvXawsxi9IJgIA2yKuSW0djy7nGpYWxqdGzDXm2a4ItjdGBGCo1yURQY7VcMq5DheW9CNs0Lwb45s5RpsZe2WMD2s3d4D5LCM/Xx/YEfUb/8fmGH3LtfNjEcGAWbEdqShjkTYwR06PmGujXtfxA58f4Tgzvv8kYpzBfkd0TrQUbYuujdoYdeH6vRQxlxnHvxgRSBi1nsm63+qP//OkQhkrBAEIgh4fwaVtfa9LWurHdZTfATAAEAjaghH493//9w07duzYsGvXrpU7juWrAAuGweZKYN0JEAAod/5LAGCCleJDnw9IHIozIj78RwkIb04+PnD7LbQmWOWpK4o2PxrtWKOacb67IxZ/J0aXRiwAfzvCQennHLDgIs9aGc7jN0eMKYz/zfxQtd3lhTrvjliUrqfhfH4o+p6IO0/MFxag1G8tHVOclZ3R0RFzlTtVxTli4crdapzGB6N/irrWDc6UtxzhXNB/J0U4Am3tOUm4MWKs3hyN4hgTWNkb4YAzlk6LGAdrfY0hwAJj7LaozKH7ss08pH5L0dnRVdGk60fggacpMDhy3kmfoyp+al4Y1zhvsGf88tRFv+tas9I4bcwRnE2cTn5DopezyVyh/KVoe8SYn6UAAGOB8cd1iOv/d0ZXRr3amt1DjUAfzJm3ZZwzzrge8FUsAgBHRDwRhKPcdgw+nLS/EJ0aLUXbotdEvx6NY3z2MS+51t0bcb3AuO4RBGI9wvWLa9H1UZvPD65xzG/K5hrPk1Ztx12SzqaNsuCbzZZa61YEcPi54/jJT35y5bvHOCGDfgdgjX6UrFXdR0k0qG1typv19rdpo2nWh0AJvjEfJzzO+DDnXyWxSGIBwQf/qMYihAXEohqLch43XCu7qTrfUl6503VhhAOyHN0S8aNQBANwoEYxxgJOF2pjZaFYX2ThoHJ3lAUai1Kc1lEcQcpuu9hsU9dR01AHFod8FYCFJ21jkclXAVh4r5VRD1gy52D7n2snxqnBIcUItnRx2mvFrCx6WVSzgKbMpfrBFts4VmVdifM2ilEH2nlaxDj81ugD0VqOBfoYx4rXfRF9jYOBMdZxNM6KGOs46VdHozpglNk0yuWHE+lrjHPzlY55N64/OG6MIQIAJcDVpt2MG66PBK02RYwfrolNOyY7vqvaSfmMtS5GYKbt9ZFyGS+THBuU+YmoOKjMVa4HoxqfH/DCuPsPd4z5dmPE2CMYyLWP+d2lT0j7toi76zB7U7Q94nNzVHtJMvLZR/34jYzC9tFsEwzYHcGEdQljoQQIstnX6KPbq/TMPfISDJlrMwAw193brnH8BkD5CkD5d4A4HPwGAI8cT9j5aFepNUo1z21bI4SeZpUIlDm5CsXzAc9d2W0RC9xxjMUQWlQjgLKWRt/xa+0slF8ZcX74E9RhscMdFhZELGpviP40YgHU1nm6LGkv7ZCe83CX59qoGItFFmgYi0cWZrNuMCWw8q7o/4pwtnlU9O6oy4J4XA7wpC9Zu9XnLuMB7tjno4eq7a4vtLMEDxhXX9uxABwJnBHqeH/HvPXkn8wb6oLTUJyTMYrrnPW45MCBpC3Mn3uj+hxibr05wlkgEED6NuMAx4J8xcnK5n6jT5k3jC3SMbfZR9r3RbdE826099SqkVw3dnVsMNc7roGMG74G0IvZ5uw/PWKc8/g/86Wt0T/cFcdhbGv0HQH3vW0ztEjH/GZ+YFwHGKejGGOMADafH4wzxnWdx8N5z7X9hyPG+vlRm3GeZCsGJ66ZPPq/LTomggVPE3RhmOQrxjWJfuV6R/6rvrJ7/1+uOTdFx0YEK86ImL9tjCDf5RFlnxIZAGhDzTSzT6A4wuU7xvwWAMGAEhDo18JZd6B9AqBfz7p/vQnUnwCof/efMTvmvCsfnHzYs1h6Q8SH+yiLCBY3/KuuUT7M1xvxJM7f1rGexLlKGfuyweOtLHZfH22LuFvL4pnFC4tfnBcW0v9nhJPKHZjro7JozGZPY4FFGW2NBTHnrRvnL2WwiB80Njgfdd3SKKP5dk923BGxIF0vw/nm+6Y4D2dH50SwXY5o51rY/6idBHbFmLuMAYw+7uLUlDLKa2kL/UpfdjHqQF2YF+Mwof5lbpUyu9RjnLS0G+cBJ4A6/G10X6NAxkJxNGD06oivUw2zk5LgxGGJquOcm7nO3P3NaNA8alnk1CfjulH4MAaY913s9iRejnBoXxIRbKlz43pFH2yK+Pxr6xwm6X5jPHa57jNPR/lsrZ+zuf2FWh3q14FmumHvtybBsVX9tuf1gVq55OVawl32yyKYPi96R9TlOkxdWSP8XQR/zsm6gzHd1bju0nfwpP/2Ngpgvvxl9MqINc2zo6sj9g+zMscZgzx1NPfW/OCe+wbbwN4EcCiKU/HCF75wwyWXXLJ
[remainder of embedded base64 PNG image removed]) Plot the comparative chart:
###Code
import seaborn as sns
import matplotlib.pyplot as plt  # needed for plt.show() below
sns.boxplot(x='model_name', y='accuracy', data=cv_df)
sns.stripplot(x='model_name', y='accuracy', data=cv_df,
size=8, jitter=True, edgecolor="gray", linewidth=2)
plt.show()
###Output
_____no_output_____
###Markdown
Mean accuracy across the 5 models:
###Code
cv_df.groupby('model_name').accuracy.mean()
###Output
_____no_output_____
###Markdown
Confusion Matrix. Build an SVM-based model:
###Code
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

model = LinearSVC()
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features, labels, df.index, test_size=0.33, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
###Output
_____no_output_____
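###Markdown
Before the full matrix, a quick scalar summary (an illustrative sketch using the test split above):
###Code
from sklearn.metrics import accuracy_score

# Fraction of held-out documents whose category was predicted correctly.
print(f"Test accuracy: {accuracy_score(y_test, y_pred):.3f}")
###Output
_____no_output_____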
###Markdown
Plot the confusion matrix for the SVM model:
###Code
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(conf_mat, annot=True, fmt='d',
xticklabels=category_id_df.CATEGORIA.values, yticklabels=category_id_df.CATEGORIA.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____
###Markdown
Let's evaluate the incorrect results. Display of incorrectly classified texts:
###Code
from IPython.display import display
for predicted in category_id_df.ID_CATEGORIA:
for actual in category_id_df.ID_CATEGORIA:
if predicted != actual and conf_mat[actual, predicted] >= 5:
print("'{}' predicted as '{}' : {} examples.".format(id_to_category[actual], id_to_category[predicted], conf_mat[actual, predicted]))
display(df.loc[indices_test[(y_test == actual) & (y_pred == predicted)]][['CATEGORIA', 'TEXTO']])
print('')
###Output
'esporte' predicted as 'coronavirus' : 6 examples.
###Markdown
Report the classifier's results for each class:
###Code
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred, target_names=df['CATEGORIA'].unique()))
###Output
precision recall f1-score support
coronavirus 0.35 0.26 0.30 66
politica 0.67 0.69 0.68 58
esporte 0.70 0.58 0.63 78
carro 0.65 0.50 0.56 70
educacao 0.48 0.74 0.58 62
entretenimento 0.55 0.52 0.54 71
economia 0.47 0.54 0.50 56
saude 0.59 0.66 0.62 67
accuracy 0.56 528
macro avg 0.56 0.56 0.55 528
weighted avg 0.56 0.56 0.55 528
|
JupyterNotebooks/Generators.ipynb | ###Markdown
Generators We have seen how useful **generators** are as **iterables**, like the ready-made ones behind list comprehensions. How can we make our own generators? Generators are just functions that encapsulate **state** and return a series of values. Rather than using a **return**, a generator uses **yield** to hand back the next item in the series. They are generally used because they are **lazy** and only compute what is needed, when it's needed. This makes them more efficient with memory and system resources. Let's build up to an example that can crash a computer due to memory.
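###Code
# A minimal sketch of the mechanics before the big demo (this countdown()
# helper is hypothetical, not part of the original notebook): execution
# pauses at each yield and resumes there on the next call, so the state
# in n is preserved between values.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
###Output
_____no_output_____
###Markdown
Now for why laziness matters. First, write a file big enough to strain memory: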
###Code
with open('some_csv.txt','w') as csvFile:
for i in range(1,100_000_000):
row = str(list(range(0,50))).replace('[','').replace(']','')+'\n'
csvFile.write(row)
###Output
_____no_output_____
###Markdown
The cell above will make a massive file. So big that I stopped making it early, and many computers can't hold it all in memory! ![image.png](attachment:image.png) The cell below will open that file, read the whole thing in, and then return it as a list of lines of text.
###Code
def csv_reader(file_name):
file = open(file_name)
result = file.read().split("\n")
return result
csv_list = csv_reader("some_csv.txt")
row_count = len(csv_list)
print(f"Row count is {row_count}")
###Output
Row count is 44137589
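###Markdown
A rough look at what that list costs (an illustrative sketch; exact numbers depend on the file, and touching every row is itself slow): `sys.getsizeof` counts only the list object, so the row strings are summed separately.
###Code
import sys

# Approximate bytes held by the list plus every row string inside it.
approx_bytes = sys.getsizeof(csv_list) + sum(sys.getsizeof(row) for row in csv_list)
print(f"~{approx_bytes / 2**30:.1f} GiB held in memory")
###Output
_____no_output_____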
###Markdown
It maxes out my memory ![image.png](attachment:image.png) It's so much memory that we need to delete the data structure before we can move on.
###Code
del csv_list  # free the multi-gigabyte list before trying again

def csv_reader(file_name):
    # Generator version: yields one row at a time instead of building a list.
    for row in open(file_name, "r"):
        yield row
csv_gen = csv_reader("some_csv.txt")
row_count = 0
for row in csv_gen:
row_count += 1
print(f"Row count is {row_count}")
###Output
Row count is 44137588
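###Markdown
The same lazy, one-pass count can also be written as a generator expression (a sketch; it re-reads the file from disk):
###Code
# A generator expression is an anonymous generator: no list is built,
# and each line is consumed and discarded as the sum advances.
row_count = sum(1 for _ in open("some_csv.txt"))
print(f"Row count is {row_count}")
###Output
_____no_output_____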
###Markdown
By comparison, the code above uses a generator and reads the file one line at a time. Each line is then processed and added to the count. It's much more memory efficient. ![image.png](attachment:image.png) So what are generators great for? They are great at handling large amounts of data and taking something that is done all at once and turning it into a stream of information. So... **Generators** are great for infinite series: $S_k = \sum_{n=0}^{k}a_n = a_0 + a_1 + \cdots + a_k$ and $L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k$. **What is the most used infinite series in engineering and physics?** It's not the Riemann zeta function, but let's look at it anyway: $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots$
###Code
def Zeta_part(s, p=True):
    # Infinite generator over the terms 1/n**s of the zeta series;
    # p toggles printing of each term as it is produced.
    n = 1
    while True:
        val = 1/(n**s)
        if p:
            print(f"Zeta({s})_{n} = {val}")
        yield val
        n += 1
###Output
_____no_output_____
###Markdown
The code above calculates $n^{-s}$, optionally prints it, and then increments n. Once you set s it just keeps going. If we were to take this code and run it as a plain sum, it would run forever! Instead we put the **yield** statement in there, so the code pauses at that line and waits until **next()** is called.
###Code
next(Zeta_part(1))
next(Zeta_part(1))
next(Zeta_part(1))
next(Zeta_part(1))
###Output
Zeta(1)_1 = 1.0
Zeta(1)_1 = 1.0
Zeta(1)_1 = 1.0
Zeta(1)_1 = 1.0
###Markdown
Why did this not work? Because we kept remaking the generator. The solution is to name the generator we make, similar to how we do with list comprehensions.
###Code
Zeta_1_n = Zeta_part(1)
next(Zeta_1_n)
next(Zeta_1_n)
next(Zeta_1_n)
next(Zeta_1_n)
next(Zeta_1_n)
###Output
Zeta(1)_2 = 0.5
Zeta(1)_3 = 0.3333333333333333
Zeta(1)_4 = 0.25
Zeta(1)_5 = 0.2
###Markdown
So now that we have the parts, how do we calculate the whole $\zeta$ function? Summing every term would take an infinite amount of time, but we can take the sum up to a certain number of terms, so that $\zeta(s) \approx \sum_{n=1}^{k}\zeta(s)_n$. Here we use the **islice** function from itertools. islice is similar to slicing a tuple or list: it gets you a part of the infinite series.
###Code
import math
import itertools
Zeta_2_n = Zeta_part(2,False)
N = 500
Z_n = itertools.islice(Zeta_2_n,N)
Zeta_2 = sum(Z_n)
diff = (math.pi**2)/6-Zeta_2
print(f"Zeta_2 from n=1 to n={N} is approximately\n {Zeta_2},\nand is off by\n {diff}\nfrom the true value")
###Output
Zeta_2 from n=1 to n=500 is approximately
1.642936065514894,
and is off by
0.0019980013333324997
from the true value
###Markdown
What if that isn't good enough and we want to keep going? Well, we can use islice to get more terms.
###Code
Z_n = itertools.islice(Zeta_2_n, 0, 10000)  # let's grab 10000 more terms
Zeta_2 = Zeta_2 + sum(Z_n)  # add them to the partial sum we already have
diff = (math.pi**2)/6 - Zeta_2
print(f"Zeta_2 from n=1 to n={N} is approximately\n {Zeta_2},\nand is off by\n {diff}\nfrom the true value")
###Output
Zeta_2 from n=1 to n=500 is approximately
1.6449241166489736,
and is off by
9.950199252761749e-06
from the true value
###Markdown
Keep running the code block above and watch as it converges to the true value. So what did the generator do? It made the while loop safe and turned it into something like a for loop where you can change the N! Where else can we see infinite series? Well, let's say we aren't iterating over n but are getting things in over time. If you are getting a stream of data in from a DAQ, you can view it as yielding information. What about in 3D printing? We saw it in the design of G-Code and the design of the STL file. In both cases we needed to be able to read something in one piece at a time, process it, and then move on, to conserve memory on the machine. What else are generators good for besides **$\infty$**? Structure. Something that will come up over and over again when you write code or interact with someone else's code is the need to do things in an order. Here is some example code
###Code
x = 0
def first():
global x
x = 2
print(f"First x={x}")
def second():
global x
x = 1/x
print(f"Second x={x}")
def final():
global x
x = 0
print(f"Final x={x}")
first()
second()
final()
###Output
First x=2
Second x=0.5
Final x=0
###Markdown
Here it worked. But what happens if we get the order wrong?
###Code
first()
final()
second()
###Output
First x=2
Final x=0
###Markdown
Oh no, the world exploded! We divided by zero. How can a generator help?
###Code
def mysequence():
first()
yield
second()
yield
final()
yield
seq = mysequence()
next(seq)
next(seq)
next(seq)
###Output
First x=2
Second x=0.5
Final x=0
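###Markdown
One more thing the sequence generator gives us for free (a small sketch): after the three steps have run, the generator is exhausted, so asking for a fourth step raises `StopIteration` instead of silently re-running anything.
###Code
seq = mysequence()
for _ in range(3):
    next(seq)

# The generator is now exhausted; a fourth next() fails loudly.
try:
    next(seq)
except StopIteration:
    print("sequence finished - no step can run out of order")
###Output
_____no_output_____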
|
notebooks/E2E_Example.ipynb | ###Markdown
Generate Sinusoidal Signals with N Carriers **On CPU where**: * fs = sample rate of signal * freq = list of carrier frequencies * N = number of points in signal
###Code
# Imports used throughout this notebook (assumed available in the environment).
import numpy as np
import cupy as cp
import cusignal
from scipy import signal

def cpu_gen_signal(fs, freq, N):
    T = 1/fs
    sig = 0
    x = np.linspace(0.0, N*(1.0/fs), N)
    for f in freq:
        sig += np.cos(f*2*np.pi*x)  # np.pi: the CPU path should not touch CuPy
    return sig
def cpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = np.zeros((int(num_sig), int(N)))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
freq = 1e6 * np.random.randint(1, 10, np.random.randint(1,5))
sig_ensemble[i,:] = cpu_gen_signal(fs, freq, N)
return sig_ensemble
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
def gpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = cp.linspace(0.0, N*(1.0/fs), N)
for f in freq:
sig += cp.cos(f*2*cp.pi*x)
return sig
# Storing num carriers for deep learning prediction -- We're even HURTING ourself here with benchmarks!
def gpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = cp.zeros((int(num_sig), int(N)))
num_carriers = cp.zeros(int(num_sig))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
num_carrier = int(cp.random.randint(1,5))
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig_ensemble[i,:] = gpu_gen_signal(fs, freq, N)
num_carriers[i] = num_carrier
return sig_ensemble, num_carriers
###Output
_____no_output_____
###Markdown
Generate a bunch of different signals with arbitrary carrier frequencies. Allow user to select number of signals, sample frequency of the ensemble, and number of points in the signal
###Code
#10MHz
fs = 10e6
# Overwrite
num_sig = 2000
N = 2**15
# Change sample rate so N=2^16
up = 2
down = 1
cpu_ensemble = cpu_gen_ensemble(fs, N, num_sig)
[gpu_ensemble, num_carriers] = gpu_gen_ensemble(fs, N, num_sig)
###Output
_____no_output_____
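###Markdown
A quick sanity check on what was just generated (an illustrative sketch):
###Code
# Both ensembles should be (num_sig, N); the GPU copy lives in device memory.
print(cpu_ensemble.shape)
print(gpu_ensemble.shape)
print(num_carriers[:5])  # carrier counts recorded for the first few signals
###Output
_____no_output_____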
###Markdown
Resample Ensemble - Use Polyphase Resampler to upsample by 2 **On CPU**
###Code
%%time
resample_cpu_ensemble = signal.resample_poly(cpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 3.14 s, sys: 528 ms, total: 3.66 s
Wall time: 3.66 s
###Markdown
**On GPU**
###Code
%%time
resample_gpu_ensemble = cusignal.resample_poly(gpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 279 ms, sys: 16.3 ms, total: 296 ms
Wall time: 294 ms
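###Markdown
Sanity check: polyphase resampling with up/down = 2/1 should double the samples per row (illustrative):
###Code
print(cpu_ensemble.shape[1], "->", resample_cpu_ensemble.shape[1])
print(gpu_ensemble.shape[1], "->", resample_gpu_ensemble.shape[1])
###Output
_____no_output_____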
###Markdown
Run Periodogram with Flattop Filter over Each Row of Ensemble **On CPU**
###Code
%%time
cf, cPxx_den = signal.periodogram(resample_cpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 3.32 s, sys: 2.2 s, total: 5.52 s
Wall time: 5.52 s
###Markdown
**On GPU**
###Code
%%time
gf, gPxx_den = cusignal.periodogram(resample_gpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 199 ms, sys: 68.7 ms, total: 268 ms
Wall time: 272 ms
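###Markdown
Before plotting, it is worth checking that the CPU and GPU pipelines agree to floating-point tolerance (a sketch; tolerances may need loosening on other hardware):
###Code
# Copy the GPU spectra back to host memory and compare row-for-row.
print(np.allclose(cPxx_den, cp.asnumpy(gPxx_den), rtol=1e-5, atol=1e-8))
###Output
_____no_output_____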
###Markdown
Visualize Output **On CPU**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.semilogy(cf, cPxx_den[0,:])
plt.show()
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
import matplotlib.pyplot as plt
plt.semilogy(cp.asnumpy(gf), cp.asnumpy(gPxx_den[0,:]))
plt.show()
###Output
_____no_output_____
###Markdown
Move to PyTorch to try to 'predict' number of carriers in signal
###Code
# Uncomment the line below to ensure PyTorch is installed.
# PyTorch is intentionally excluded from our Docker images due to its size.
# Alternatively, the docker image can be run with the following variable:
# docker run -e EXTRA_CONDA_PACKAGES="-c pytorch pytorch"...
#!conda install -y -c pytorch pytorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
device = torch.device("cuda:0")
#90 percent of dataset for training
training_idx_max = int(0.9*gPxx_den.shape[0])
gPxx_den = gPxx_den.astype(cp.float32)
num_carriers = num_carriers.astype(cp.int64)
# Zero copy memory from cupy to DLPack to Torch
x = torch.as_tensor(gPxx_den[0:training_idx_max,:], device=device)
y = torch.as_tensor(num_carriers[0:training_idx_max], device=device)
# Test
x_t = torch.as_tensor(gPxx_den[training_idx_max:gPxx_den.shape[0],:], device=device)
y_t = torch.as_tensor(num_carriers[training_idx_max:gPxx_den.shape[0]], device=device)
# Number of possible carriers
output_size = 10
epochs = 75
batch_size = 10
learning_rate = 1e-2
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
self.l1 = nn.Linear(x.shape[1], 1500)
self.relu = nn.ReLU()
self.l3 = nn.Linear(1500, 750)
self.relu = nn.ReLU()
self.l5 = nn.Linear(750, output_size)
def forward(self, x):
x = self.l1(x)
x = self.relu(x)
x = self.l3(x)
x = self.relu(x)
x = self.l5(x)
return F.log_softmax(x, dim=1)
net = Network().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.5)
loss_log = []
for e in range(epochs):
for i in range(0, x.shape[0], batch_size):
x_mini = x[i:i + batch_size]
y_mini = y[i:i + batch_size]
x_var = Variable(x_mini)
y_var = Variable(y_mini)
optimizer.zero_grad()
net_out = net(x_var)
loss = F.nll_loss(net_out, y_var)
loss.backward()
optimizer.step()
if i % 100 == 0:
loss_log.append(loss.data)
print('Epoch: {} - Loss: {:.6f}'.format(e, loss.data))
###Output
_____no_output_____
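###Markdown
How big is this network? A quick parameter count (illustrative; the first Linear layer dominates because its input is a full one-sided spectrum):
###Code
n_params = sum(p.numel() for p in net.parameters())
print(f"{n_params:,} trainable parameters")
###Output
_____no_output_____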
###Markdown
**Measure Inference Accuracy on Test Set**
###Code
test_loss = 0
correct = 0
for i in range(x_t.shape[0]):
pred = net(x_t[i,:].expand(1,-1)).argmax()
correct += pred.eq(y_t[i].view_as(pred)).sum().item()
print('Accuracy: ', 100. * correct / x_t.shape[0])
###Output
_____no_output_____
###Markdown
**Save Model**
###Code
checkpoint = {'net': Network(),
'state_dict': net.state_dict(),
'optimizer': optimizer.state_dict()}
torch.save(checkpoint,"E2E_sig_proc.pt")
###Output
_____no_output_____
###Markdown
**Load Model**
###Code
checkpoint = torch.load('E2E_sig_proc.pt')
checkpoint.keys()
###Output
_____no_output_____
###Markdown
**Generate New Signal and Look at Inferencing Power**
###Code
num_carrier = 2
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig = gpu_gen_signal(fs, freq, N)
r_sig = cusignal.resample_poly(sig, up, down, window='flattop')
f, Pxx = cusignal.periodogram(r_sig, fs, 'flattop', scaling='spectrum')
x = torch.as_tensor(Pxx.astype(cp.float32), device=device)
pred_num_carrier = net(x.expand(1,-1)).argmax().item()
print(pred_num_carrier)
###Output
_____no_output_____
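###Markdown
Compare the prediction against the ground truth we generated (a small sketch):
###Code
print(f"true carriers: {num_carrier}, predicted: {pred_num_carrier}")
###Output
_____no_output_____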
###Markdown
Generate Sinusodial Signals with N Carriers **On CPU where**:* fs = sample rate of signal* freq = list of carrier frequencies* N = number of points in signal
###Code
def cpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = np.linspace(0.0, N*(1.0/fs), N)
for f in freq:
sig += np.cos(f*2*cp.pi*x)
return sig
def cpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = np.zeros((int(num_sig), int(N)))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
freq = 1e6 * np.random.randint(1, 10, np.random.randint(1,5))
sig_ensemble[i,:] = cpu_gen_signal(fs, freq, N)
return sig_ensemble
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
def gpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = cp.linspace(0.0, N*(1.0/fs), N)
for f in freq:
sig += cp.cos(f*2*cp.pi*x)
return sig
# Storing num carriers for deep learning prediction -- We're even HURTING ourself here with benchmarks!
def gpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = cp.zeros((int(num_sig), int(N)))
num_carriers = cp.zeros(int(num_sig))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
num_carrier = int(cp.random.randint(1,5))
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig_ensemble[i,:] = gpu_gen_signal(fs, freq, N)
num_carriers[i] = num_carrier
return sig_ensemble, num_carriers
###Output
_____no_output_____
###Markdown
Generate a bunch of different signals with arbitrary carrier frequencies. Allow user to select number of signals, sample frequency of the ensemble, and number of points in the signal
###Code
#10MHz
fs = 10e6
# Overwrite
num_sig = 2000
N = 2**15
# Change sample rate so N=2^16
up = 2
down = 1
cpu_ensemble = cpu_gen_ensemble(fs, N, num_sig)
[gpu_ensemble, num_carriers] = gpu_gen_ensemble(fs, N, num_sig)
###Output
_____no_output_____
###Markdown
Resample Ensemble - Use Polyphase Resampler to upsample by 2 **On CPU**
###Code
%%time
resample_cpu_ensemble = signal.resample_poly(cpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 3.14 s, sys: 528 ms, total: 3.66 s
Wall time: 3.66 s
###Markdown
**On GPU**
###Code
%%time
resample_gpu_ensemble = cusignal.resample_poly(gpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 279 ms, sys: 16.3 ms, total: 296 ms
Wall time: 294 ms
###Markdown
Run Periodogram with Flattop Filter over Each Row of Ensemble **On CPU**
###Code
%%time
cf, cPxx_den = signal.periodogram(resample_cpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 3.32 s, sys: 2.2 s, total: 5.52 s
Wall time: 5.52 s
###Markdown
**On GPU**
###Code
%%time
gf, gPxx_den = cusignal.periodogram(resample_gpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 199 ms, sys: 68.7 ms, total: 268 ms
Wall time: 272 ms
###Markdown
Visualize Output **On CPU**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.semilogy(cf, cPxx_den[0,:])
plt.show()
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
import matplotlib.pyplot as plt
plt.semilogy(cp.asnumpy(gf), cp.asnumpy(gPxx_den[0,:]))
plt.show()
###Output
_____no_output_____
###Markdown
Move to PyTorch to try to 'predict' number of carriers in signal
###Code
# Uncomment the line below to ensure PyTorch is installed.
# PyTorch is intentionally excluded from our Docker images due to its size.
# Alternatively, the docker image can be run with the following variable:
# docker run -e EXTRA_CONDA_PACKAGES="-c pytorch pytorch"...
#!conda install -y -c pytorch pytorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
device = torch.device("cuda:0")
#90 percent of dataset for training
training_idx_max = int(0.9*gPxx_den.shape[0])
gPxx_den = gPxx_den.astype(cp.float32)
num_carriers = num_carriers.astype(cp.int64)
# Zero copy memory from cupy to DLPack to Torch
x = torch.as_tensor(gPxx_den[0:training_idx_max,:], device=device)
y = torch.as_tensor(num_carriers[0:training_idx_max], device=device)
# Test
x_t = torch.as_tensor(gPxx_den[training_idx_max:gPxx_den.shape[0],:], device=device)
y_t = torch.as_tensor(num_carriers[training_idx_max:gPxx_den.shape[0]], device=device)
# Number of possible carriers
output_size = 10
epochs = 75
batch_size = 10
learning_rate = 1e-2
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
self.l1 = nn.Linear(x.shape[1], 1500)
self.relu = nn.ReLU()
self.l3 = nn.Linear(1500, 750)
self.relu = nn.ReLU()
self.l5 = nn.Linear(750, output_size)
def forward(self, x):
x = self.l1(x)
x = self.relu(x)
x = self.l3(x)
x = self.relu(x)
x = self.l5(x)
return F.log_softmax(x, dim=1)
net = Network().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.5)
loss_log = []
for e in range(epochs):
for i in range(0, x.shape[0], batch_size):
x_mini = x[i:i + batch_size]
y_mini = y[i:i + batch_size]
x_var = Variable(x_mini)
y_var = Variable(y_mini)
optimizer.zero_grad()
net_out = net(x_var)
loss = F.nll_loss(net_out, y_var)
loss.backward()
optimizer.step()
if i % 100 == 0:
loss_log.append(loss.data)
print('Epoch: {} - Loss: {:.6f}'.format(e, loss.data))
###Output
_____no_output_____
###Markdown
**Measure Inference Accuracy on Test Set**
###Code
test_loss = 0
correct = 0
for i in range(x_t.shape[0]):
pred = net(x_t[i,:].expand(1,-1)).argmax()
correct += pred.eq(y_t[i].view_as(pred)).sum().item()
print('Accuracy: ', 100. * correct / x_t.shape[0])
###Output
_____no_output_____
###Markdown
**Save Model**
###Code
checkpoint = {'net': Network(),
'state_dict': net.state_dict(),
'optimizer': optimizer.state_dict()}
torch.save(checkpoint,"E2E_sig_proc.pt")
###Output
_____no_output_____
###Markdown
**Load Model**
###Code
checkpoint = torch.load('E2E_sig_proc.pt')
checkpoint.keys()
###Output
_____no_output_____
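###Markdown
A sketch of actually restoring the model from this checkpoint layout (the cell above only inspects the keys): the saved 'net' entry is an untrained Network instance, so loading the stored state_dict into it recovers the trained weights.
###Code
# Assumed restore path for the checkpoint format saved above.
restored_net = checkpoint['net'].to(device)
restored_net.load_state_dict(checkpoint['state_dict'])
restored_net.eval()
###Output
_____no_output_____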
###Markdown
**Generate New Signal and Look at Inferencing Power**
###Code
num_carrier = 2
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig = gpu_gen_signal(fs, freq, N)
r_sig = cusignal.resample_poly(sig, up, down, window='flattop')
f, Pxx = cusignal.periodogram(r_sig, fs, 'flattop', scaling='spectrum')
x = torch.as_tensor(Pxx.astype(cp.float32), device=device)
pred_num_carrier = net(x.expand(1,-1)).argmax().item()
print(pred_num_carrier)
###Output
_____no_output_____
###Markdown
Generate Sinusoidal Signals with N Carriers **On CPU**, where:
* fs = sample rate of signal
* freq = list of carrier frequencies
* N = number of points in signal
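Each signal is just a sum of cosines, $s(t_n) = \sum_k \cos(2\pi f_k t_n)$ sampled at $t_n \approx n/f_s$ (the code's `linspace` endpoint convention differs negligibly), which is what both implementations below compute.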
###Code
def cpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = np.linspace(0.0, N*(1.0/fs), N)
for f in freq:
        sig += np.cos(f*2*np.pi*x)  # np.pi, not cp.pi: this is the CPU path
return sig
def cpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = np.zeros((int(num_sig), int(N)))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
freq = 1e6 * np.random.randint(1, 10, np.random.randint(1,5))
sig_ensemble[i,:] = cpu_gen_signal(fs, freq, N)
return sig_ensemble
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
def gpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = cp.linspace(0.0, N*(1.0/fs), N)
for f in freq:
sig += cp.cos(f*2*cp.pi*x)
return sig
# Storing num carriers for deep learning prediction -- we're even HURTING ourselves here in the benchmarks!
def gpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = cp.zeros((int(num_sig), int(N)))
num_carriers = cp.zeros(int(num_sig))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
num_carrier = int(cp.random.randint(1,5))
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig_ensemble[i,:] = gpu_gen_signal(fs, freq, N)
num_carriers[i] = num_carrier
return sig_ensemble, num_carriers
###Output
_____no_output_____
###Markdown
Generate a bunch of different signals with arbitrary carrier frequencies. Allow user to select number of signals, sample frequency of the ensemble, and number of points in the signal
###Code
#10MHz
fs = 10e6
# Overwrite
num_sig = 2000
N = 2**15
# Change sample rate so N=2^16
up = 2
down = 1
cpu_ensemble = cpu_gen_ensemble(fs, N, num_sig)
[gpu_ensemble, num_carriers] = gpu_gen_ensemble(fs, N, num_sig)
###Output
_____no_output_____
###Markdown
Resample Ensemble - Use Polyphase Resampler to upsample by 2 **On CPU**
###Code
%%time
resample_cpu_ensemble = signal.resample_poly(cpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 3.14 s, sys: 528 ms, total: 3.66 s
Wall time: 3.66 s
###Markdown
**On GPU**
###Code
%%time
resample_gpu_ensemble = cusignal.resample_poly(gpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 279 ms, sys: 16.3 ms, total: 296 ms
Wall time: 294 ms
###Markdown
Run Periodogram with Flattop Filter over Each Row of Ensemble **On CPU**
###Code
%%time
cf, cPxx_den = signal.periodogram(resample_cpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 3.32 s, sys: 2.2 s, total: 5.52 s
Wall time: 5.52 s
###Markdown
**On GPU**
###Code
%%time
gf, gPxx_den = cusignal.periodogram(resample_gpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 199 ms, sys: 68.7 ms, total: 268 ms
Wall time: 272 ms
###Markdown
Visualize Output **On CPU**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.semilogy(cf, cPxx_den[0,:])
plt.show()
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
import matplotlib.pyplot as plt
plt.semilogy(cp.asnumpy(gf), cp.asnumpy(gPxx_den[0,:]))
plt.show()
###Output
_____no_output_____
###Markdown
Move to PyTorch to try to 'predict' number of carriers in signal
###Code
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
from torch.utils.dlpack import to_dlpack
from torch.utils.dlpack import from_dlpack
#90 percent of dataset for training
training_idx_max = int(0.9*gPxx_den.shape[0])
gPxx_den = gPxx_den.astype(cp.float32)
num_carriers = num_carriers.astype(cp.int64)
# Zero copy memory from cupy to DLPack to Torch
x = from_dlpack(gPxx_den[0:training_idx_max,:].toDlpack())
y = from_dlpack(num_carriers[0:training_idx_max].toDlpack())
# Test
x_t = from_dlpack(gPxx_den[training_idx_max:gPxx_den.shape[0],:].toDlpack())
y_t = from_dlpack(num_carriers[training_idx_max:gPxx_den.shape[0]].toDlpack())
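# to_dlpack (imported above) enables the reverse zero-copy direction as well,
# e.g. (sketch): cupy_view = cp.fromDlpack(to_dlpack(x))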
device = torch.device("cuda:0")
# Number of possible carriers
output_size = 10
epochs = 75
batch_size = 10
learning_rate = 1e-2
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
self.l1 = nn.Linear(x.shape[1], 1500)
self.relu = nn.ReLU()
self.l3 = nn.Linear(1500, 750)
self.relu = nn.ReLU()
self.l5 = nn.Linear(750, output_size)
def forward(self, x):
x = self.l1(x)
x = self.relu(x)
x = self.l3(x)
x = self.relu(x)
x = self.l5(x)
        return F.log_softmax(x, dim=-1)  # explicit dim avoids the deprecation warning and handles 1-D inference inputs
net = Network().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.5)
loss_log = []
for e in range(epochs):
for i in range(0, x.shape[0], batch_size):
x_mini = x[i:i + batch_size]
y_mini = y[i:i + batch_size]
x_var = Variable(x_mini)
y_var = Variable(y_mini)
optimizer.zero_grad()
net_out = net(x_var)
loss = F.nll_loss(net_out, y_var)
loss.backward()
optimizer.step()
if i % 100 == 0:
loss_log.append(loss.data)
print('Epoch: {} - Loss: {:.6f}'.format(e, loss.data))
###Output
_____no_output_____
###Markdown
**Measure Inference Accuracy on Test Set**
###Code
test_loss = 0
correct = 0
for i in range(x_t.shape[0]):
pred = net(x_t[i,:]).argmax()
correct += pred.eq(y_t[i].view_as(pred)).sum().item()
print('Accuracy: ', 100. * correct / x_t.shape[0])
###Output
_____no_output_____
###Markdown
**Save Model**
###Code
checkpoint = {'net': Network(),
'state_dict': net.state_dict(),
'optimizer': optimizer.state_dict()}
torch.save(checkpoint,"E2E_sig_proc.pt")
###Output
_____no_output_____
###Markdown
**Load Model**
###Code
checkpoint = torch.load('E2E_sig_proc.pt')
checkpoint.keys()
###Output
_____no_output_____
###Markdown
**Generate New Signal and Look at Inferencing Power**
###Code
num_carrier = 2
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig = gpu_gen_signal(fs, freq, N)
r_sig = cusignal.resample_poly(sig, up, down, window='flattop')
f, Pxx = cusignal.periodogram(r_sig, fs, 'flattop', scaling='spectrum')
x = from_dlpack(Pxx.astype(cp.float32).toDlpack())
pred_num_carrier = net(x).argmax().item()
print(pred_num_carrier)
###Output
_____no_output_____
###Markdown
Generate Sinusoidal Signals with N Carriers **On CPU**, where:
* fs = sample rate of signal
* freq = list of carrier frequencies
* N = number of points in signal
###Code
def cpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = np.linspace(0.0, N*(1.0/fs), N)
for f in freq:
        sig += np.cos(f*2*np.pi*x)  # np.pi, not cp.pi: this is the CPU path
return sig
def cpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = np.zeros((int(num_sig), int(N)))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
freq = 1e6 * np.random.randint(1, 10, np.random.randint(1,5))
sig_ensemble[i,:] = cpu_gen_signal(fs, freq, N)
return sig_ensemble
###Output
_____no_output_____
###Markdown
**On GPU**
Please note: the first run of a GPU function includes setting up memory and 'pre-warming' the GPU. For accurate performance and benchmarking, each cell is typically run multiple times.
###Code
def gpu_gen_signal(fs, freq, N):
T = 1/fs
sig = 0
x = cp.linspace(0.0, N*(1.0/fs), N)
for f in freq:
sig += cp.cos(f*2*cp.pi*x)
return sig
# Storing num carriers for deep learning prediction -- we're even HURTING ourselves here in the benchmarks!
def gpu_gen_ensemble(fs, N, num_sig):
sig_ensemble = cp.zeros((int(num_sig), int(N)))
num_carriers = cp.zeros(int(num_sig))
for i in range(int(num_sig)):
# random number of carriers in random locations for each signal
num_carrier = int(cp.random.randint(1,5))
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig_ensemble[i,:] = gpu_gen_signal(fs, freq, N)
num_carriers[i] = num_carrier
return sig_ensemble, num_carriers
###Output
_____no_output_____
###Markdown
Generate a bunch of different signals with arbitrary carrier frequencies. Allow user to select number of signals, sample frequency of the ensemble, and number of points in the signal
###Code
#10MHz
fs = 10e6
# Overwrite
num_sig = 2000
N = 2**15
# Change sample rate so N=2^16
up = 2
down = 1
cpu_ensemble = cpu_gen_ensemble(fs, N, num_sig)
[gpu_ensemble, num_carriers] = gpu_gen_ensemble(fs, N, num_sig)
###Output
_____no_output_____
###Markdown
Resample Ensemble - Use Polyphase Resampler to upsample by 2 **On CPU**
###Code
%%time
resample_cpu_ensemble = signal.resample_poly(cpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 3.14 s, sys: 528 ms, total: 3.66 s
Wall time: 3.66 s
###Markdown
**On GPU**
###Code
%%time
resample_gpu_ensemble = cusignal.resample_poly(gpu_ensemble, up, down, axis=1, window='flattop')
###Output
CPU times: user 279 ms, sys: 16.3 ms, total: 296 ms
Wall time: 294 ms
###Markdown
Run Periodogram with Flattop Filter over Each Row of Ensemble **On CPU**
###Code
%%time
cf, cPxx_den = signal.periodogram(resample_cpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 3.32 s, sys: 2.2 s, total: 5.52 s
Wall time: 5.52 s
###Markdown
**On GPU**
###Code
%%time
gf, gPxx_den = cusignal.periodogram(resample_gpu_ensemble, fs, 'flattop', scaling='spectrum', axis=1)
###Output
CPU times: user 199 ms, sys: 68.7 ms, total: 268 ms
Wall time: 272 ms
###Markdown
Visualize Output **On CPU**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.semilogy(cf, cPxx_den[0,:])
plt.show()
###Output
_____no_output_____
###Markdown
**On GPU**
###Code
import matplotlib.pyplot as plt
plt.semilogy(cp.asnumpy(gf), cp.asnumpy(gPxx_den[0,:]))
plt.show()
###Output
_____no_output_____
###Markdown
Move to PyTorch to try to 'predict' number of carriers in signal
###Code
# Uncomment the line below to ensure PyTorch is installed.
# PyTorch is intentionally excluded from our Docker images due to its size.
# Alternatively, the docker image can be run with the following variable:
# docker run -e EXTRA_CONDA_PACKAGES="-c pytorch pytorch"...
#!conda install -y -c pytorch pytorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
device = torch.device("cuda:0")
#90 percent of dataset for training
training_idx_max = int(0.9*gPxx_den.shape[0])
gPxx_den = gPxx_den.astype(cp.float32)
num_carriers = num_carriers.astype(cp.int64)
# Zero-copy hand-off from CuPy to Torch: torch.as_tensor consumes the CuPy
# array's __cuda_array_interface__, so no device-to-device copy is made
x = torch.as_tensor(gPxx_den[0:training_idx_max,:], device=device)
y = torch.as_tensor(num_carriers[0:training_idx_max], device=device)
# Test
x_t = torch.as_tensor(gPxx_den[training_idx_max:gPxx_den.shape[0],:], device=device)
y_t = torch.as_tensor(num_carriers[training_idx_max:gPxx_den.shape[0]], device=device)
# Number of possible carriers
output_size = 10
epochs = 75
batch_size = 10
learning_rate = 1e-2
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
self.l1 = nn.Linear(x.shape[1], 1500)
self.relu = nn.ReLU()
self.l3 = nn.Linear(1500, 750)
self.relu = nn.ReLU()
self.l5 = nn.Linear(750, output_size)
def forward(self, x):
x = self.l1(x)
x = self.relu(x)
x = self.l3(x)
x = self.relu(x)
x = self.l5(x)
return F.log_softmax(x, dim=1)
net = Network().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.5)
loss_log = []
for e in range(epochs):
for i in range(0, x.shape[0], batch_size):
x_mini = x[i:i + batch_size]
y_mini = y[i:i + batch_size]
x_var = Variable(x_mini)
y_var = Variable(y_mini)
optimizer.zero_grad()
net_out = net(x_var)
loss = F.nll_loss(net_out, y_var)
loss.backward()
optimizer.step()
if i % 100 == 0:
loss_log.append(loss.data)
print('Epoch: {} - Loss: {:.6f}'.format(e, loss.data))
###Output
_____no_output_____
###Markdown
**Measure Inference Accuracy on Test Set**
###Code
test_loss = 0
correct = 0
for i in range(x_t.shape[0]):
pred = net(x_t[i,:].expand(1,-1)).argmax()
correct += pred.eq(y_t[i].view_as(pred)).sum().item()
print('Accuracy: ', 100. * correct / x_t.shape[0])
###Output
_____no_output_____
###Markdown
**Save Model**
###Code
checkpoint = {'net': Network(),
'state_dict': net.state_dict(),
'optimizer': optimizer.state_dict()}
torch.save(checkpoint,"E2E_sig_proc.pt")
###Output
_____no_output_____
###Markdown
**Load Model**
###Code
checkpoint = torch.load('E2E_sig_proc.pt')
checkpoint.keys()
###Output
_____no_output_____
###Markdown
**Generate New Signal and Look at Inferencing Power**
###Code
num_carrier = 2
freq = 1e6 * cp.random.randint(1, 10, num_carrier)
sig = gpu_gen_signal(fs, freq, N)
r_sig = cusignal.resample_poly(sig, up, down, window='flattop')
f, Pxx = cusignal.periodogram(r_sig, fs, 'flattop', scaling='spectrum')
x = torch.as_tensor(Pxx.astype(cp.float32), device=device)
pred_num_carrier = net(x.expand(1,-1)).argmax().item()
print(pred_num_carrier)
###Output
_____no_output_____ |
Data Warehouse/Amazon United Kingdom/.ipynb_checkpoints/Amazon_UK - Supplements - Sports - Fat Burner --print ns-checkpoint.ipynb | ###Markdown
List of Products
###Code
# Imports needed by this cell (absent from the original checkpoint file).
import re
import time
import boto3
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains

amazon_usa = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A17911764011%2Cn%3A11057651&dc&',
'conditioner':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A17911764011%2Cn%3A11057251&dc&',
'hair_scalp_treatment':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A11057431&dc&',
'treatment_oil':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A10666439011&dc&',
'hair_loss':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A10898755011&dc&'},
'skin_care':{'body':{'cleansers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060521%2Cn%3A11056281&dc&',
'moisturizers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060521%2Cn%3A11060661&dc&',
'treatments':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060521%2Cn%3A11056421&dc&'},
'eyes':{'creams':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11061941%2Cn%3A7730090011&dc&',
'gels':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11061941%2Cn%3A7730092011&dc&',
'serums':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11061941%2Cn%3A7730098011&dc&'},
'face':{'f_cleansers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11060901&dc&',
'f_moisturizers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11060901&dc&',
'scrubs':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11061091&dc&',
'toners':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11061931&dc&',
'f_treatments':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11061931&dc&'},
'lipcare':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A3761351&dc&'}},
'food':{'tea':{'herbal':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A16318511&dc&',
'green':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A16318471&dc&',
'black':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A16318411&dc&',
'chai':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A348022011&dc&'},
'coffee':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318031%2Cn%3A2251593011&dc&',
'dried_fruits':{'mixed':'https://www.amazon.com/s?k=dried+fruits&i=grocery&rh=n%3A16310101%2Cn%3A6506977011%2Cn%3A9865332011%2Cn%3A9865334011%2Cn%3A9865348011&dc&',
'mangoes':'https://www.amazon.com/s?k=dried+fruits&rh=n%3A16310101%2Cn%3A9865346011&dc&'},
'nuts':{'mixed':'https://www.amazon.com/s?k=nuts&rh=n%3A16310101%2Cn%3A16322931&dc&',
'peanuts':'https://www.amazon.com/s?k=nuts&i=grocery&rh=n%3A16310101%2Cn%3A18787303011%2Cn%3A16310221%2Cn%3A16322881%2Cn%3A16322941&dc&',
'cashews':'https://www.amazon.com/s?k=nuts&i=grocery&rh=n%3A16310101%2Cn%3A18787303011%2Cn%3A16310221%2Cn%3A16322881%2Cn%3A16322901&dc&'}},
'supplements':{'sports':{'pre_workout':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973697011&dc&',
'protein':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973704011&dc&',
'fat_burner':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973679011&dc&',
'weight_gainer':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973725011&dc&'},
'vitamins_dietary':{'supplements':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A3764441%2Cn%3A6939426011&dc&',
'multivitamins':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A3774861&dc&'}},
'wellness':{'ayurveda':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A10079996011%2Cn%3A13052911%2Cn%3A13052941&dc&',
'essential_oil_set':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A10079996011%2Cn%3A13052911%2Cn%3A18502613011&dc&',
'massage_oil':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A10079996011%2Cn%3A14442631&dc&'},
'personal_accessories':{'bags':{'women':{'clutches':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A17037745011&dc&',
'crossbody':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A2475899011&dc&',
'fashion':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A16977745011&dc&',
'hobo':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A16977747011&dc&'}},
'jewelry':{'anklets':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454897011&dc&',
'bracelets':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454898011&dc&',
'earrings':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454917011&dc&',
'necklaces':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454917011&dc&',
'rings':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454939011&dc&'},
'artisan_fabrics':'https://www.amazon.com/s?k=fabrics&rh=n%3A2617941011%2Cn%3A12899121&dc&'}}
amazon_uk = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.co.uk/b/ref=amb_link_5?ie=UTF8&node=74094031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'conditioner':'https://www.amazon.co.uk/b/ref=amb_link_6?ie=UTF8&node=2867976031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'hair_loss':'https://www.amazon.co.uk/b/ref=amb_link_11?ie=UTF8&node=2867979031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'hair_scalp_treatment':'https://www.amazon.co.uk/b/ref=amb_link_7?ie=UTF8&node=2867977031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'treatment_oil':'https://www.amazon.co.uk/hair-oil-argan/b/ref=amb_link_8?ie=UTF8&node=2867981031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031'},
'skin_care':{'body':{'cleanser':'https://www.amazon.co.uk/s/ref=lp_344269031_nr_n_3?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A344269031%2Cn%3A344282031&bbn=344269031&ie=UTF8&qid=1581612722&rnid=344269031',
'moisturizers':'https://www.amazon.co.uk/s/ref=lp_344269031_nr_n_1?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A344269031%2Cn%3A2805272031&bbn=344269031&ie=UTF8&qid=1581612722&rnid=344269031'},
'eyes':{'creams':'https://www.amazon.co.uk/s/ref=lp_118465031_nr_n_0?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118465031%2Cn%3A344259031&bbn=118465031&ie=UTF8&qid=1581612984&rnid=118465031',
'gels':'https://www.amazon.co.uk/s/ref=lp_118465031_nr_n_1?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118465031%2Cn%3A344258031&bbn=118465031&ie=UTF8&qid=1581613044&rnid=118465031',
'serums':'https://www.amazon.co.uk/s/ref=lp_118465031_nr_n_3?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118465031%2Cn%3A344257031&bbn=118465031&ie=UTF8&qid=1581613044&rnid=118465031'},
'face':{'cleansers':'https://www.amazon.co.uk/s/ref=lp_118466031_nr_n_1?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A344265031&bbn=118466031&ie=UTF8&qid=1581613120&rnid=118466031',
'moisturizers':'https://www.amazon.co.uk/s/ref=lp_118466031_nr_n_3?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A2805291031&bbn=118466031&ie=UTF8&qid=1581613120&rnid=118466031',
'toners':'https://www.amazon.co.uk/s/ref=lp_118466031_nr_n_0?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A344267031&bbn=118466031&ie=UTF8&qid=1581613120&rnid=118466031',
'treatments':'https://www.amazon.co.uk/s?bbn=118466031&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A18918424031&dc&fst=as%3Aoff&qid=1581613120&rnid=118466031&ref=lp_118466031_nr_n_7'},
'lipcare':'https://www.amazon.co.uk/s/ref=lp_118464031_nr_n_4?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118467031&bbn=118464031&ie=UTF8&qid=1581613357&rnid=118464031'}},
'food':{'tea':{'herbal':'https://www.amazon.co.uk/s?k=tea&i=grocery&rh=n%3A340834031%2Cn%3A358584031%2Cn%3A11711401%2Cn%3A406567031&dc&qid=1581613483&rnid=344155031&ref=sr_nr_n_1',
'green':'https://www.amazon.co.uk/s?k=tea&i=grocery&rh=n%3A340834031%2Cn%3A358584031%2Cn%3A11711401%2Cn%3A406566031&dc&qid=1581613483&rnid=344155031&ref=sr_nr_n_3',
'black':'https://www.amazon.co.uk/s?k=tea&i=grocery&rh=n%3A340834031%2Cn%3A358584031%2Cn%3A11711401%2Cn%3A406564031&dc&qid=1581613483&rnid=344155031&ref=sr_nr_n_2'},
'coffee':'https://www.amazon.co.uk/s?k=coffee&rh=n%3A340834031%2Cn%3A11711391&dc&qid=1581613715&rnid=1642204031&ref=sr_nr_n_2',
'dried_fruits':{'mixed':'https://www.amazon.co.uk/s?k=dried+fruits&rh=n%3A340834031%2Cn%3A9733163031&dc&qid=1581613770&rnid=1642204031&ref=sr_nr_n_2'},
'nuts':{'mixed':'https://www.amazon.co.uk/s?k=mixed&rh=n%3A359964031&ref=nb_sb_noss',
'peanuts':'https://www.amazon.co.uk/s?k=peanuts&rh=n%3A359964031&ref=nb_sb_noss',
'cashews':'https://www.amazon.co.uk/s?k=cashew&rh=n%3A359964031&ref=nb_sb_noss'}},
'supplements':{'sports':{'pre_workout':'https://www.amazon.co.uk/b/?node=5977685031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hc3L_1&pf_rd_r=C5MZHH5TH5F868B6FQWD&pf_rd_p=8086b6c9-ae16-5c3c-a879-030afa4ee08f&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826478031',
'protein':'https://www.amazon.co.uk/b/?node=2826510031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hc3L_0&pf_rd_r=C5MZHH5TH5F868B6FQWD&pf_rd_p=8086b6c9-ae16-5c3c-a879-030afa4ee08f&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826478031',
'fat_burner':'https://www.amazon.co.uk/b/?node=5977737031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hc3L_2&pf_rd_r=C5MZHH5TH5F868B6FQWD&pf_rd_p=8086b6c9-ae16-5c3c-a879-030afa4ee08f&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826478031'},
'vitamins_dietary':{'supplements':'https://www.amazon.co.uk/b/?_encoding=UTF8&node=2826534031&bbn=65801031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hdc7_2&pf_rd_r=AY01DQVCB4SE7VVE7MTK&pf_rd_p=1ecdbf02-af23-502a-b7ab-9916ddd6690c&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826484031',
'multivitamins':'https://www.amazon.co.uk/b/?_encoding=UTF8&node=2826506031&bbn=65801031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hdc7_1&pf_rd_r=AY01DQVCB4SE7VVE7MTK&pf_rd_p=1ecdbf02-af23-502a-b7ab-9916ddd6690c&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826484031'}},
'wellness':{'massage_oil':'https://www.amazon.co.uk/b/?node=3360479031&ref_=Oct_s9_apbd_odnav_hd_bw_b50nmJ_4&pf_rd_r=GYVYF52HT2004EDTY67W&pf_rd_p=3f8e4361-c00b-588b-a07d-ff259bf98bbc&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=74073031',
'ayurveda':'https://www.amazon.co.uk/s?k=ayurveda&rh=n%3A65801031%2Cn%3A2826449031&dc&qid=1581686978&rnid=1642204031&ref=sr_nr_n_22'},
'personal_accessories':{'bags':{'women':{'clutches':'https://www.amazon.co.uk/b/?node=1769563031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_3&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031',
'crossbody':'https://www.amazon.co.uk/b/?node=1769564031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_1&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031',
'fashion':'https://www.amazon.co.uk/b/?node=1769560031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_5&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031',
'hobo':'https://www.amazon.co.uk/b/?node=1769565031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_4&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031'}},
'jewelry':{'anklets':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_0?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382860031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'bracelets':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_1?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382861031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'earrings':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_4?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382865031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'necklaces':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_7?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382868031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'rings':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_10?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382871031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031'},
'artisan_fabrics':'https://www.amazon.co.uk/s?k=fabric&rh=n%3A11052681%2Cn%3A3063518031&dc&qid=1581687726&rnid=1642204031&ref=a9_sc_1'}}
amazon_india = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.in/b/ref=s9_acss_bw_cg_btyH1_2a1_w?ie=UTF8&node=1374334031&pf_rd_m=A1K21FY43GMZF8&pf_rd_s=merchandised-search-5&pf_rd_r=JHDJ4QHM0APVS05NGF4G&pf_rd_t=101&pf_rd_p=41b9c06b-1514-47de-a1c6-f4f13fb55ffe&pf_rd_i=1374305031',
'conditioner':'https://www.amazon.in/b/ref=s9_acss_bw_cg_btyH1_2b1_w?ie=UTF8&node=1374306031&pf_rd_m=A1K21FY43GMZF8&pf_rd_s=merchandised-search-5&pf_rd_r=CBABMCW6C69JRBGZNWWP&pf_rd_t=101&pf_rd_p=41b9c06b-1514-47de-a1c6-f4f13fb55ffe&pf_rd_i=1374305031',
'treatment_oil':''},
'skin_care':[],
'wellness_product':[]},
'food':{'tea':[],
'coffee':[],
'dried_fruits':[],
'nuts':[],
'supplements':[]},
'personal_accessories':{'bags':[],
'jewelry':[],
'artisan_fabrics':[]}}
amazon_aus = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5150253051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cXATz&pf_rd_r=6SEM7GFDN7CQ2W4KXM9M&pf_rd_p=9dd4b462-1094-5e36-890d-bb1b694c8b53&pf_rd_s=merchandised-search-12&pf_rd_t=BROWSE&pf_rd_i=5150070051',
'conditioner':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5150226051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cXATz&pf_rd_r=6SEM7GFDN7CQ2W4KXM9M&pf_rd_p=9dd4b462-1094-5e36-890d-bb1b694c8b53&pf_rd_s=merchandised-search-12&pf_rd_t=BROWSE&pf_rd_i=5150070051'},
'skin_care':[],
'wellness_product':[]},
'food':{'tea':{'herbal':'',
'green':'https://www.amazon.com.au/s/ref=lp_5555388051_nr_n_3?fst=as%3Aoff&rh=n%3A5547635051%2Cn%3A%215547636051%2Cn%3A5555314051%2Cn%3A5555388051%2Cn%3A5555543051&bbn=5555388051&ie=UTF8&qid=1584282626&rnid=5555388051',
'black':'https://www.amazon.com.au/s/ref=lp_5555388051_nr_n_0?fst=as%3Aoff&rh=n%3A5547635051%2Cn%3A%215547636051%2Cn%3A5555314051%2Cn%3A5555388051%2Cn%3A5555541051&bbn=5555388051&ie=UTF8&qid=1584285938&rnid=5555388051',
'chai':''},
'coffee':'https://www.amazon.com.au/s/ref=lp_5555314051_nr_n_0?fst=as%3Aoff&rh=n%3A5547635051%2Cn%3A%215547636051%2Cn%3A5555314051%2Cn%3A5555382051&bbn=5555314051&ie=UTF8&qid=1584207291&rnid=5555314051',
'dried_fruits':{'mixed':'',
'mangoes':''},
'nuts':{'mixed':'https://www.amazon.com.au/s?k=mixed%20nuts&ref=nb_sb_noss&rh=n%3A5555474051&url=node%3D5555474051',
'peanuts':'https://www.amazon.com.au/s?k=peanuts&ref=nb_sb_noss&rh=n%3A5555474051&url=node%3D5555474051',
'cashews':'https://www.amazon.com.au/s?k=cashews&ref=nb_sb_noss&rh=n%3A5555474051&url=node%3D5555474051'}},
'supplements':{'sports':{'pre_workout':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5148339051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cPRoZ_3&pf_rd_r=HN11C6S8SDVY38KJZYV3&pf_rd_p=1c658db3-169d-5f89-8673-898e1fd5ee1e&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5148230051',
'protein':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5148365051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cPRoZ_0&pf_rd_r=6GVHZAP9J9WY7HGH888R&pf_rd_p=1c658db3-169d-5f89-8673-898e1fd5ee1e&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5148230051',
'fat_burner':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5148760051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cPRoZ_4&pf_rd_r=6GVHZAP9J9WY7HGH888R&pf_rd_p=1c658db3-169d-5f89-8673-898e1fd5ee1e&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5148230051',
'weight_gainer':''},
'vitamins_dietary':{'supplements':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5148358051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cPS4h_0&pf_rd_r=VGHE5D2HR7JYWNCAAVYT&pf_rd_p=214a2f58-0505-577e-aa86-fdd72d600a9a&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5148231051',
'multivitamins':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5148351051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cPS4h_2&pf_rd_r=VGHE5D2HR7JYWNCAAVYT&pf_rd_p=214a2f58-0505-577e-aa86-fdd72d600a9a&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5148231051'}},
'wellness':{'ayurveda':'https://www.amazon.com.au/s?k=ayurveda&ref=nb_sb_noss&rh=n%3A5148210051&url=node%3D5148210051',
'essential_oil_set':'https://www.amazon.com.au/s?k=essential+oil&rh=n%3A5148210051&ref=nb_sb_noss',
'massage_oil':'https://www.amazon.com.au/s?k=massage%20oil&ref=nb_sb_noss&rh=n%3A5148210051&url=node%3D5148210051'},
'personal_accessories':{'bags':{'women':{'clutches':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5131114051&bbn=4851856051&ref_=Oct_s9_apbd_odnav_hd_bw_b5bEF3L_2&pf_rd_r=YZ7JGTT62DKZB8C97D3H&pf_rd_p=bf3f7e2d-f60e-5998-994f-a490e47553c6&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5130783051',
'crossbody':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5131115051&bbn=4851856051&ref_=Oct_s9_apbd_odnav_hd_bw_b5bEF3L_3&pf_rd_r=YZ7JGTT62DKZB8C97D3H&pf_rd_p=bf3f7e2d-f60e-5998-994f-a490e47553c6&pf_rd_s=merchandised-search-10&pf_rd_t=BROWSE&pf_rd_i=5130783051',
'fashion':'',
'hobo':''}},
'jewelry':{'anklets':'',
'bracelets':'',
'earrings':'',
'necklaces':'',
'rings':''},
'artisan_fabrics':''}}
amazon = {'USA':amazon_usa,
'UK':amazon_uk,
'India':amazon_india,
'Australia':amazon_aus}
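# The nested dict is indexed country -> category -> subcategory -> product;
# e.g. the leaf URL this notebook targets:
# amazon['UK']['supplements']['sports']['fat_burner']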
def hover(browser, xpath):
    '''
    Perform an automated mouse hover (followed by a click) on the element
    located by the given xpath in the Selenium webdriver.
    PARAMETER
    ---------
    browser: Selenium-based web browser
    xpath: str
        xpath of the element in the webpage where the hover operation has to
        be performed.
    '''
element_to_hover_over = browser.find_element_by_xpath(xpath)
hover = ActionChains(browser).move_to_element(element_to_hover_over)
hover.perform()
element_to_hover_over.click()
def browser(link):
    '''This function opens a Selenium-based Chrome browser specifically tuned
    to work for Amazon product (single item) webpages. Functionality
    includes translation of the webpage, clicking the initial popups, and hovering
    over product images so that the images can be scraped.
    PARAMETER
    ---------
    link: str
        Amazon product item link
    RETURN
    ------
    driver: Selenium web browser with the above operations applied
    '''
options = Options()
prefs = {
"translate_whitelists": {"ja":"en","de":'en'},
"translate":{"enabled":"true"}
}
# helium = r'C:\Users\Dell-pc\AppData\Local\Google\Chrome\User Data\Default\Extensions\njmehopjdpcckochcggncklnlmikcbnb\4.2.12_0'
# options.add_argument(helium)
options.add_experimental_option("prefs", prefs)
options.headless = True
driver = webdriver.Chrome(chrome_options=options)
driver.get(link)
    try:
        driver.find_element_by_xpath('//*[@id="nav-main"]/div[1]/div[2]/div/div[3]/span[1]/span/input').click()
    except:
        pass
    # Hover over each alternate-image thumbnail (li[3] through li[9]) so the
    # full-size images load, closing the image popover after each hover;
    # any missing thumbnail or popover is simply skipped.
    for i in range(3, 10):
        try:
            hover(driver, '//*[@id="altImages"]/ul/li[{}]'.format(i))
        except:
            pass
        try:
            driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
        except:
            pass
    return driver
def scroll_temp(driver):
    '''
    Automated scroller for a Selenium web browser.
    PARAMETER
    ---------
    driver: Selenium web browser
    '''
pre_scroll_height = driver.execute_script('return document.body.scrollHeight;')
run_time, max_run_time = 0, 2
while True:
iteration_start = time.time()
        # Scroll to 60% of the current page height; a partial scroll is enough
        # to trigger the lazily loaded content without overshooting.
        driver.execute_script('window.scrollTo(0,0.6*document.body.scrollHeight);')
post_scroll_height = driver.execute_script('return document.body.scrollHeight;')
scrolled = post_scroll_height != pre_scroll_height
timed_out = run_time >= max_run_time
if scrolled:
run_time = 0
pre_scroll_height = post_scroll_height
elif not scrolled and not timed_out:
run_time += time.time() - iteration_start
elif not scrolled and timed_out:
break
def scroll(driver):
scroll_temp(driver)
from selenium.common.exceptions import NoSuchElementException
try:
try:
element = driver.find_element_by_xpath('//*[@id="reviewsMedley"]/div/div[1]')
except NoSuchElementException:
try:
element = driver.find_element_by_xpath('//*[@id="reviewsMedley"]')
except NoSuchElementException:
element = driver.find_element_by_xpath('//*[@id="detail-bullets_feature_div"]')
actions = ActionChains(driver)
actions.move_to_element(element).perform()
except NoSuchElementException:
pass
def browser_link(product_link,country):
    '''Returns all the web links of the products based on the first
    page of the product category. It captures the product links across all
    the pages for that specific product.
    PARAMETER
    ---------
    product_link: str
        The initial web link of the product page. This is generally the
        first page of all the items for that specific product.
    country: str
        Country key used to prepend the marketplace domain to relative hrefs.
    RETURN
    ------
    links: list
        A list of strings containing all the links of the items
        for the specific product.
    '''
driver = browser(product_link)
    soup = BeautifulSoup(driver.page_source, 'lxml')
    pages = 1  # fall back to a single page when no pagination widget is found
try:
pages_soup = soup.findAll("ul",{"class":"a-pagination"})
pages = int(pages_soup[0].findAll("li",{'class':'a-disabled'})[1].text)
except:
pass
try:
pages_soup = soup.findAll("div",{"id":"pagn"})
pages = int(pages_soup[0].findAll("span",{'class':'pagnDisabled'})[0].text)
except:
try:
pages_soup = soup.findAll("div",{"id":"pagn"})
pages = int(pages_soup[0].findAll("span",{'class':'pagnDisabled'})[1].text)
except:
pass
print(pages)
links = []
for page in range(1,pages+1):
print(page)
link_page = product_link + '&page=' + str(page)
driver_temp = browser(link_page)
time.sleep(2)
soup_temp = BeautifulSoup(driver_temp.page_source, 'lxml')
try:
search = soup_temp.findAll("div",{"id":"mainResults"})
temp_search = search[1].findAll("a",{'class':'a-link-normal s-access-detail-page s-color-twister-title-link a-text-normal'})
for i in range(len(temp_search)):
if country == 'Australia':
link = temp_search[i].get('href')
else:
link = countries_link[country] + temp_search[i].get('href')
links.append(link)
print(len(links))
except:
try:
search = soup_temp.findAll("div",{"class":"s-result-list s-search-results sg-row"})
temp_search = search[1].findAll("h2")
if len(temp_search) < 2:
for i in range(len(search[0].findAll("h2"))):
temp = search[0].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
else:
for i in range(len(search[1].findAll("h2"))):
temp = search[1].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
except:
pass
try:
search = soup_temp.findAll("div",{"id":"mainResults"})
temp_search = search[0].findAll("a",{'class':'a-link-normal s-access-detail-page s-color-twister-title-link a-text-normal'})
for i in range(len(temp_search)):
if country == 'Australia':
link = temp_search[i].get('href')
else:
link = countries_link[country] + temp_search[i].get('href')
links.append(link)
print(len(links))
except:
try:
search = soup_temp.findAll("div",{"class":"s-result-list s-search-results sg-row"})
temp_search = search[1].findAll("h2")
if len(temp_search) < 2:
for i in range(len(search[0].findAll("h2"))):
temp = search[0].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
else:
for i in range(len(search[1].findAll("h2"))):
temp = search[1].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
except:
print('Not Scrapable')
return links
def indexes(amazon_links,link_list):
amazon_dict = amazon_links
if len(link_list) == 5:
return amazon_dict[link_list[0]][link_list[1]][link_list[2]][link_list[3]][link_list[4]]
elif len(link_list) == 4:
return amazon_dict[link_list[0]][link_list[1]][link_list[2]][link_list[3]]
elif len(link_list) == 3:
return amazon_dict[link_list[0]][link_list[1]][link_list[2]]
elif len(link_list) == 2:
return amazon_dict[link_list[0]][link_list[1]]
elif len(link_list) == 1:
return amazon_dict[link_list[0]]
    else:
        print("Invalid Product")
        return None
def products_links(country, **kwargs):
amazon_links = amazon[country]
directory_temp = []
for key, value in kwargs.items():
directory_temp.append(value)
directory = '/'.join(directory_temp)
print(directory)
product_link = indexes(amazon_links,directory_temp)
main_links = browser_link(product_link,country=country)
return main_links,directory
###Output
_____no_output_____
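###Markdown
A minimal usage sketch (the call below is illustrative, not from the original run): the keyword arguments, in order, form the path into the `amazon` dictionary, so this collects every paginated product link for the UK fat-burner category.
###Code
# Hypothetical invocation; the keyword names are arbitrary, only their order matters.
links, directory = products_links('UK',
                                  category='supplements',
                                  subcategory='sports',
                                  product='fat_burner')
print(directory)   # supplements/sports/fat_burner
print(len(links))  # number of product-page URLs gathered across all pages
###Output
_____no_output_____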
###Markdown
Product Scraper Function
###Code
def delete_images(filename):
import os
file_path = '/home/jishu/Amazon_AU/'
os.remove(file_path + filename)
def upload_s3(filename,key):
    # Credentials redacted; load these from environment variables or an IAM
    # role in practice rather than hard-coding them.
    key_id = 'YOUR_AWS_ACCESS_KEY_ID'
    access_key = 'YOUR_AWS_SECRET_ACCESS_KEY'
bucket_name = 'amazon-data-ecfullfill'
s3 = boto3.client('s3',aws_access_key_id=key_id,
aws_secret_access_key=access_key)
try:
s3.upload_file(filename,bucket_name,key)
except FileNotFoundError:
pass
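# Usage sketch (filenames are illustrative): push a scraped image to S3
# under the bucket configured above, then delete the local copy:
#   upload_s3('B01EXAMPLE_1.jpg', 'Amazon_UK/fat_burner/B01EXAMPLE_1.jpg')
#   delete_images('B01EXAMPLE_1.jpg')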
def product_info(link,directory,country):
'''Get all the product information of an Amazon Product'''
#Opening Selenium Webdrive with Amazon product
driver = browser(link)
time.sleep(4)
scroll(driver)
time.sleep(2)
#Initializing BeautifulSoup operation in selenium browser
selenium_soup = BeautifulSoup(driver.page_source, 'lxml')
time.sleep(2)
#Product Title
try:
product_title = driver.find_element_by_xpath('//*[@id="productTitle"]').text
except:
product_title = 'Not Scrapable'
print(product_title)
#Ratings - Star
try:
rating_star = float(selenium_soup.findAll('span',{'class':'a-icon-alt'})[0].text.split()[0])
except:
rating_star = 'Not Scrapable'
print(rating_star)
#Rating - Overall
try:
overall_rating = int(selenium_soup.findAll('span',{'id':'acrCustomerReviewText'})[0].text.split()[0].replace(',',''))
except:
overall_rating = 'Not Scrapable'
print(overall_rating)
#Company
try:
company = selenium_soup.findAll('a',{'id':'bylineInfo'})[0].text
except:
company = 'Not Scrapable'
    print(company)
#Price
try:
denomination = '$'
if country=='UAE':
denomination = selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[:3]
price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[3:])
else:
denomination = selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[0]
price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[1:])
except:
        try:
            if country=='UAE':
                # UAE prices carry a three-character currency code ('AED')
                try:
                    price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[3:].replace(',',''))
                except:
                    price = float(selenium_soup.findAll('span',{'id':'priceblock_dealprice'})[0].text[3:].replace(',',''))
            else:
                # other marketplaces use a single-character symbol, so strip one char
                try:
                    price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[1:].replace(',',''))
                except:
                    price = float(selenium_soup.findAll('span',{'id':'priceblock_dealprice'})[0].text[1:].replace(',',''))
        except:
            denomination = 'Not Scrapable'
            price = 'Not Scrapable'
print(denomination,price)
#Product Highlights
try:
temp_ph = selenium_soup.findAll('ul',{'class':'a-unordered-list a-vertical a-spacing-none'})[0].findAll('li')
counter_ph = len(temp_ph)
product_highlights = []
for i in range(counter_ph):
raw = temp_ph[i].text
clean = raw.strip()
product_highlights.append(clean)
product_highlights = '<CPT14>'.join(product_highlights)
except:
try:
temp_ph = selenium_soup.findAll('div',{'id':'rich-product-description'})[0].findAll('p')
counter_ph = len(temp_ph)
product_highlights = []
for i in range(counter_ph):
raw = temp_ph[i].text
clean = raw.strip()
product_highlights.append(clean)
product_highlights = '<CPT14>'.join(product_highlights)
except:
product_highlights = 'Not Available'
print(product_highlights)
#Product Details/Dimensions:
#USA
try:
temp_pd = selenium_soup.findAll('div',{'class':'content'})[0].findAll('ul')[0].findAll('li')
counter_pd = len(temp_pd)
for i in range(counter_pd):
try:
if re.findall('ASIN',temp_pd[i].text)[0]:
try:
asin = temp_pd[i].text.split(' ')[1]
except:
pass
except IndexError:
pass
try:
if re.findall('Product Dimensions|Product Dimension|Product dimensions',temp_pd[i].text)[0]:
pd_temp = temp_pd[i].text.strip().split('\n')[2].strip().split(';')
try:
product_length = float(pd_temp[0].split('x')[0])
except IndexError:
pass
try:
product_width = float(pd_temp[0].split('x')[1])
except IndexError:
pass
try:
product_height = float(pd_temp[0].split('x')[2].split(' ')[1])
except IndexError:
pass
try:
pd_unit = pd_temp[0].split('x')[2].split(' ')[2]
except IndexError:
pass
try:
product_weight = float(pd_temp[1].split(' ')[1])
except IndexError:
pass
try:
weight_unit = pd_temp[1].split(' ')[2]
except IndexError:
pass
except:
pass
try:
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text)[0]:
sweight_temp = temp_pd[i].text.split(':')[1].strip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except IndexError:
pass
try:
if re.findall('Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text)[0]:
x = temp_pd[i].text.replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
try:
best_seller_cat = int(temp_pd[i].text.strip().replace('\n','').split(' ')[3].replace(',',''))
best_seller_prod = int(x[indexes[0]].split('#')[1].split('in')[0])
except:
try:
best_seller_cat = x[indexes[0]].split('#')[1]
except:
pass
try:
best_seller_prod = x[indexes[1]].split('#')[1].split('in')[0]
except:
pass
except IndexError:
pass
print(asin)
except:
pass
try:
temp_pd = selenium_soup.findAll('div',{'class':'content'})[1].findAll('ul')[0].findAll('li')
counter_pd = len(temp_pd)
for i in range(counter_pd):
try:
if re.findall('ASIN',temp_pd[i].text)[0]:
try:
asin = temp_pd[i].text.split(' ')[1]
except:
pass
except IndexError:
pass
try:
if re.findall('Product Dimensions|Product Dimension|Product dimensions',temp_pd[i].text)[0]:
pd_temp = temp_pd[i].text.strip().split('\n')[2].strip().split(';')
try:
product_length = float(pd_temp[0].split('x')[0])
except IndexError:
pass
try:
product_width = float(pd_temp[0].split('x')[1])
except IndexError:
pass
try:
product_height = float(pd_temp[0].split('x')[2].split(' ')[1])
except IndexError:
pass
try:
pd_unit = pd_temp[0].split('x')[2].split(' ')[2]
except IndexError:
pass
try:
product_weight = float(pd_temp[1].split(' ')[1])
except IndexError:
pass
try:
weight_unit = pd_temp[1].split(' ')[2]
except IndexError:
pass
except:
pass
try:
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text)[0]:
sweight_temp = temp_pd[i].text.split(':')[1].strip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except IndexError:
pass
try:
if re.findall('Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text)[0]:
x = temp_pd[i].text.replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
try:
best_seller_cat = int(temp_pd[i].text.strip().replace('\n','').split(' ')[3].replace(',',''))
best_seller_prod = int(x[indexes[0]].split('#')[1].split('in')[0])
except:
try:
best_seller_cat = x[indexes[0]].split('#')[1]
except:
pass
try:
best_seller_prod = x[indexes[1]].split('#')[1].split('in')[0]
except:
pass
except IndexError:
pass
print(asin)
except:
pass
#India
try:
temp_pd = selenium_soup.findAll('div',{'class':'content'})[0].findAll('ul')[0].findAll('li')
counter_pd = len(temp_pd)
for i in range(counter_pd):
try:
if re.findall('ASIN',temp_pd[i].text)[0]:
asin = temp_pd[i].text.split(' ')[1]
except:
pass
try:
if re.findall('Product Dimensions|Product Dimension|Product dimensions',temp_pd[i].text)[0]:
pd_temp = temp_pd[i].text.strip().split('\n')[2].strip().split(' ')
try:
product_length = float(pd_temp[0])
except:
pass
try:
product_width = float(pd_temp[2])
except:
pass
try:
product_height = float(pd_temp[4])
except:
pass
try:
pd_unit = pd_temp[5]
except:
pass
try:
product_weight = float(pd_temp[1].split(' ')[1])
except:
pass
try:
weight_unit = pd_temp[1].split(' ')[2]
except:
pass
print(asin)
except IndexError:
pass
try:
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text)[0]:
sweight_temp = temp_pd[i].text.split(':')[1].strip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except IndexError:
pass
try:
if re.findall('Item Weight|Product Weight|Item weight|Product weight|Boxed-product Weight',temp_pd[i].text)[0]:
pd_weight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].strip()
product_weight = float(pd_weight_temp.split(' ')[0])
weight_unit = pd_weight_temp.split(' ')[1]
except IndexError:
pass
try:
if re.findall('Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text)[0]:
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
try:
best_seller_cat = int(temp_pd[i].text.strip().replace('\n','').split(' ')[3].replace(',',''))
best_seller_prod = int(x[indexes[0]].split('#')[1].split('in')[0])
except:
try:
best_seller_cat = x[indexes[0]].split('#')[1]
except:
pass
try:
best_seller_prod = x[indexes[1]].split('#')[1].split('in')[0]
except:
pass
except IndexError:
pass
print(asin)
except:
pass
try:
try:
asin = list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[0].findAll('td')[1])[0]
except:
pass
try:
dimensions = list(selenium_soup.findAll('div',{'class':'pdTab'})[0].findAll('tr')[0].findAll('td')[1])[0]
except:
pass
try:
weight_temp = list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[1].findAll('td')[1])[0]
except:
pass
try:
best_seller_cat = float(list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[5].findAll('td')[1])[0].split('\n')[-1].split(' ')[0].replace(',',''))
except:
pass
try:
best_seller_prod = int(list(list(list(list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[5].findAll('td')[1])[5])[1])[1])[0].replace('#',''))
except:
pass
try:
product_length = float(dimensions.split('x')[0])
except:
pass
try:
product_width = float(dimensions.split('x')[1])
except:
pass
try:
product_height = float(dimensions.split('x')[2].split(' ')[1])
except:
pass
try:
product_weight = weight_temp.split(' ')[0]
except:
pass
try:
weight_unit = weight_temp.split(' ')[1]
except:
pass
try:
pd_unit = dimensions.split(' ')[-1]
except:
pass
print(asin)
except:
try:
for j in [0,1]:
temp_pd = selenium_soup.findAll('table',{'class':'a-keyvalue prodDetTable'})[j].findAll('tr')
for i in range(len(temp_pd)):
if re.findall('ASIN',temp_pd[i].text):
asin = temp_pd[i].text.strip().split('\n')[3].strip()
if re.findall('Item Model Number|Item model number',temp_pd[i].text):
bait = temp_pd[i].text.strip().split('\n')[3].strip()
if re.findall('Best Sellers Rank|Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text):
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
best_seller_cat = int(x[indexes[0]].split('#')[1])
best_seller_prod = int(x[indexes[1]].split('#')[1].split('in')[0])
if re.findall('Product Dimensions|Product dimension|Product Dimension',temp_pd[i].text):
dimensions = temp_pd[i].text.strip().split('\n')[3].strip().split('x')
product_length = float(dimensions[0].strip())
product_width = float(dimensions[1].strip())
product_height = float(dimensions[2].strip().split(' ')[0])
pd_unit = dimensions[2].strip().split(' ')[1]
if re.findall('Item Weight|Product Weight|Item weight|Boxed-product Weight',temp_pd[i].text):
weight_temp = temp_pd[i].text.strip().split('\n')[3].strip()
product_weight = float(weight_temp.split(' ')[0])
weight_unit = weight_temp.split(' ')[1]
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text):
sweight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].lstrip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
print(asin,bait)
except:
try:
temp_pd = selenium_soup.findAll('div',{'id':'prodDetails'})[0].findAll('tr')
for i in range(len(temp_pd)):
if re.findall('ASIN',temp_pd[i].text):
asin = temp_pd[i].text.strip().split('\n')[3].strip()
if re.findall('Best Sellers Rank|Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text):
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
best_seller_cat = int(x[indexes[0]].split('#')[1])
best_seller_prod = int(x[indexes[1]].split('#')[1].split('in')[0])
if re.findall('Product Dimensions|Product dimension|Product Dimension',temp_pd[i].text):
dimensions = temp_pd[i].text.strip().split('\n')[3].strip().split('x')
product_length = float(dimensions[0].strip())
product_width = float(dimensions[1].strip())
product_height = float(dimensions[2].strip().split(' ')[0])
pd_unit = dimensions[2].strip().split(' ')[1]
if re.findall('Item Weight|Product Weight|Item weight|Boxed-product Weight',temp_pd[i].text):
weight_temp = temp_pd[i].text.strip().split('\n')[3].strip()
product_weight = float(weight_temp.split(' ')[0])
weight_unit = weight_temp.split(' ')[1]
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text):
sweight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].lstrip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except:
try:
temp_pd = selenium_soup.findAll('div',{'id':'detail_bullets_id'})[0].findAll('tr')[0].findAll('li')
for i in range(len(temp_pd)):
if re.findall('ASIN',temp_pd[i].text):
asin = temp_pd[i].text.strip().split(':')[1].strip()
if re.findall('Best Sellers Rank|Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text):
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
best_seller_cat = int(x[indexes[0]].split('#')[1])
best_seller_prod = int(x[indexes[1]].split('#')[1].split('in')[0])
if re.findall('Product Dimensions|Product dimension|Product Dimension',temp_pd[i].text):
dimensions = temp_pd[i].text.strip().split('\n')[2].strip().split('x')
product_length = float(dimensions[0].strip())
product_width = float(dimensions[1].strip())
product_height = float(dimensions[2].strip().split(' ')[0])
pd_unit = dimensions[2].strip().split(' ')[1]
if re.findall('Item Weight|Product Weight|Item weight|Boxed-product Weight',temp_pd[i].text):
weight_temp = temp_pd[i].text.strip().split('\n')[2].strip()
product_weight = float(weight_temp.split(' ')[0])
weight_unit = weight_temp.split(' ')[1]
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text):
sweight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].lstrip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except:
pass
try:
print(asin)
except NameError:
asin = 'Not Scrapable'
try:
print(best_seller_cat)
except NameError:
best_seller_cat = 'Not Scrapable'
try:
print(best_seller_prod)
except NameError:
best_seller_prod = 'Not Scrapable'
try:
print(product_length)
except NameError:
product_length = 'Not Scrapable'
try:
print(product_width)
except NameError:
product_width = 'Not Scrapable'
try:
print(product_height)
except NameError:
product_height = 'Not Scrapable'
try:
print(product_weight)
except NameError:
product_weight = 'Not Scrapable'
try:
print(weight_unit)
except NameError:
weight_unit = 'Not Scrapable'
try:
print(pd_unit)
except NameError:
pd_unit = 'Not Scrapable'
try:
print(shipping_weight_unit)
except NameError:
shipping_weight_unit = 'Not Scrapable'
try:
print(shipping_weight)
except NameError:
shipping_weight = 'Not Scrapable'
print(product_length,product_width,product_height,product_weight,asin,pd_unit,
best_seller_cat,best_seller_prod,weight_unit,shipping_weight,shipping_weight_unit)
#Customer Review Ratings - Overall
time.sleep(0.5)
try:
temp_crr = selenium_soup.findAll('table',{'id':'histogramTable'})[1].findAll('a')
crr_main = {}
crr_temp = []
counter_crr = len(temp_crr)
for i in range(counter_crr):
crr_temp.append(temp_crr[i]['title'])
crr_temp = list(set(crr_temp))
for j in range(len(crr_temp)):
crr_temp[j] = crr_temp[j].split(' ')
stopwords = ['stars','represent','of','rating','reviews','have']
for word in list(crr_temp[j]):
if word in stopwords:
crr_temp[j].remove(word)
print(crr_temp[j])
try:
if re.findall(r'%',crr_temp[j][1])[0]:
crr_main.update({int(crr_temp[j][0]): int(crr_temp[j][1].replace('%',''))})
except:
crr_main.update({int(crr_temp[j][1]): int(crr_temp[j][0].replace('%',''))})
except:
try:
temp_crr = selenium_soup.findAll('table',{'id':'histogramTable'})[1].findAll('span',{'class':'a-offscreen'})
crr_main = {}
counter_crr = len(temp_crr)
star = counter_crr
for i in range(counter_crr):
crr_main.update({star:int(temp_crr[i].text.strip().split('/n')[0].split(' ')[0].replace('%',''))})
star -= 1
except:
pass
try:
crr_5 = crr_main[5]
except:
crr_5 = 0
try:
crr_4 = crr_main[4]
except:
crr_4 = 0
try:
crr_3 = crr_main[3]
except:
crr_3 = 0
try:
crr_2 = crr_main[2]
except:
crr_2 = 0
try:
crr_1 = crr_main[1]
except:
crr_1 = 0
#Customer Review Ratings - By Feature
time.sleep(1)
try:
driver.find_element_by_xpath('//*[@id="cr-summarization-attributes-list"]/div[4]/a/span').click()
temp_fr = driver.find_element_by_xpath('//*[@id="cr-summarization-attributes-list"]').text
temp_fr = temp_fr.split('\n')
crr_feature_title = []
crr_feature_rating = []
for i in [0,2,4]:
crr_feature_title.append(temp_fr[i])
for j in [1,3,5]:
crr_feature_rating.append(temp_fr[j])
crr_feature = dict(zip(crr_feature_title,crr_feature_rating))
except:
try:
temp_fr = driver.find_element_by_xpath('//*[@id="cr-summarization-attributes-list"]').text
temp_fr = temp_fr.split('\n')
crr_feature_title = []
crr_feature_rating = []
for i in [0,2,4]:
crr_feature_title.append(temp_fr[i])
for j in [1,3,5]:
crr_feature_rating.append(temp_fr[j])
crr_feature = dict(zip(crr_feature_title,crr_feature_rating))
except:
crr_feature = 'Not Defined'
try:
crr_feature_key = list(crr_feature.keys())
except:
pass
try:
crr_fr_1 = crr_feature[crr_feature_key[0]]
except:
crr_fr_1 = 0
try:
crr_fr_2 = crr_feature[crr_feature_key[1]]
except:
crr_fr_2 = 0
try:
crr_fr_3 = crr_feature[crr_feature_key[2]]
except:
crr_fr_3 = 0
#Tags:
time.sleep(1)
try:
temp_tags = selenium_soup.findAll('div',{'class':'cr-lighthouse-terms'})[0]
counter_tags = len(temp_tags)
print('Counter Tags:',counter_tags)
tags = []
for i in range(counter_tags):
tags.append(temp_tags.findAll('span')[i].text.strip())
print(tags[i])
except:
tags = ['None']
try:
for feature in crr_feature_key:
tags.append(feature)
except:
pass
tags = list(set(tags))
tags = '<CPT14>'.join(tags)
print(tags)
#Images
images = []
for i in [0,3,4,5,6,7,8,9]:
try:
images.append(selenium_soup.findAll('div',{'class':'imgTagWrapper'})[i].find('img')['src'])
except:
pass
import urllib.request
for i in range(len(images)):
if asin =='Not Scrapable':
product_image = "{}_{}.jpg".format(product_title,i)
product_image = product_image.replace('/','')
urllib.request.urlretrieve(images[i],product_image)
upload_s3("{}_{}.jpg".format(product_title,i),
directory+"/images/" + product_image)
delete_images(product_image)
else:
product_image = "{}_{}.jpg".format(asin,i)
product_image = product_image.replace('/','')
urllib.request.urlretrieve(images[i],product_image)
upload_s3("{}_{}.jpg".format(asin,i),
directory+"/images/" + product_image)
delete_images(product_image)
return [product_title,rating_star,overall_rating,company,price,
product_highlights,product_length,product_width,product_height,
product_weight,asin,pd_unit,best_seller_cat,best_seller_prod,
weight_unit,shipping_weight,shipping_weight_unit,crr_5,crr_4,
crr_3,crr_2,crr_1,crr_fr_1,crr_fr_2,crr_fr_3,tags,directory]
###Output
_____no_output_____
###Markdown
Data Wrangling
###Code
# append one scraped product record, together with its link/category metadata,
# to a local SQLite database and sync the .db file to S3
def database(product_data,**kwargs):
try:
try:
link = kwargs['link']
except KeyError:
print('Error in Link')
try:
country = kwargs['country']
except KeyError:
print("Enter Country Name")
try:
cat1 = kwargs['cat1']
except KeyError:
pass
try:
cat2 = kwargs['cat2']
except KeyError:
pass
try:
cat3 = kwargs['cat3']
except KeyError:
pass
try:
cat4 = kwargs['cat4']
except KeyError:
pass
try:
product = kwargs['product']
except KeyError:
print("Enter Product Name")
metadata = [link,country,cat1,cat2,cat3,cat4,product]
except NameError:
try:
cat4 = None
metadata = [link,country,cat1,cat2,cat3,cat4,product]
except NameError:
try:
cat4 = None
cat3 = None
metadata = [link,country,cat1,cat2,cat3,cat4,product]
except NameError:
cat4 = None
cat3 = None
cat2 = None
metadata = [link,country,cat1,cat2,cat3,cat4,product]
conn = sqlite3.connect('{}.db'.format(product))
headers = ['link','country','cat1','cat2','cat3','cat4','product','product_title',
'rating_star','overall_rating','company','price',
'product_highlights','product_length','product_width','product_height',
'product_weight','asin','pd_unit','best_seller_cat','best_seller_prod',
'weight_unit','shipping_weight','shipping_weight_unit','crr_5','crr_4',
'crr_3','crr_2','crr_1','crr_fr_1','crr_fr_2','crr_fr_3','tags','images_link']
product_data.append(metadata)
# move the metadata list (appended last) back to the front so the values align with the headers
product_data = product_data[-1] + product_data[:len(product_data)-1]
temp = pd.DataFrame(data= [product_data],columns=headers)
temp.to_sql('Product',conn,if_exists='append')
upload_s3(product+'.db',directory+'/'+product+'.db')
conn.close()
# download the existing product database from S3 and return only the links
# that have not been scraped yet, so an interrupted run can resume
def checkpoint(link_list,directory,product):
BUCKET_NAME = 'amazon-data-ecfullfill'
key_id = 'AKIAWR6YW7N5ZKW35OJI'
access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
KEY = '{}/{}.db'.format(directory,product)
s3 = boto3.resource('s3',aws_access_key_id=key_id,
aws_secret_access_key=access_key)
try:
s3.Bucket(BUCKET_NAME).download_file(KEY, 'test.db')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
print("The object does not exist.")
else:
raise
conn = sqlite3.connect('test.db')
try:
df = pd.read_sql('''SELECT * FROM Product''', conn)
product_link = df['link'].unique()
new_list = []
for i in link_list:
if i in product_link:
pass
else:
new_list.append(i)
except:
new_list = link_list
return new_list
###Output
_____no_output_____
###Markdown
Execution
###Code
#Initializing the product per Jupyter Notebook
country = 'Australia'
cat1 = 'supplements'
cat2='sports'
# cat3='None'
# cat4 = 'None'
product='fat_burner'
# links,directory = products_links(country=country,category=cat1,cat2=cat2,product=product)
# test_1 = {'links':links,'directory':directory}
# import pickle
# with open('au_supplements_sports_fat_burner.pkl', 'wb') as f:
# pickle.dump(test_1, f)
with open('au_supplements_sports_fat_burner.pkl', 'rb') as f:
file = pickle.load(f)
links = file['links']
directory = 'Amazon_AU/supplements/sports/fat_burner'
# on a fresh run, iterate over links; after an interruption, run the
# checkpoint cell below and iterate over new_links instead
for link in links:
data = product_info(link=link,directory=directory,country=country)
conn = sqlite3.connect('{}.db'.format(product))
database(product_data=data,link=link,country=country,
cat1=cat1,cat2=cat2,product=product)
# Run if there is an interruption
new_links = checkpoint(links,directory,product)
len(new_links)
len(links)
###Output
_____no_output_____
###Markdown
Testing the datasets in S3
###Code
BUCKET_NAME = 'amazon-data-ecfullfill' # replace with your bucket name
key_id = 'AKIAWR6YW7N5ZKW35OJI'
access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
KEY = 'Amazon_USA/health_and_beauty/hair_products/shampoo/shampoo.db' # replace with your object key
s3 = boto3.resource('s3',aws_access_key_id=key_id,
aws_secret_access_key=access_key)
try:
s3.Bucket(BUCKET_NAME).download_file(KEY, 'test.db')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
print("The object does not exist.")
else:
raise
conn = sqlite3.connect('test.db')  # the S3 object was downloaded as test.db above
df_USA = pd.read_sql("SELECT * FROM Product",conn)
df_USA.iloc[:,:15]
df_USA.iloc[:,15:]
len(df_USA['link'].unique())  # number of distinct product links already in the database
# def upload_s3(filename,key):
# key_id = 'AKIAWR6YW7N5ZKW35OJI'
# access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
# bucket_name = 'amazon-data-ecfullfill'
# s3 = boto3.client('s3',aws_access_key_id=key_id,
# aws_secret_access_key=access_key)
# # s3.put_object(Bucket=bucket_name, Key='Amazon/health_and_beauty/hair_product/shampoo')
# s3.upload_file(filename,bucket_name,key)
###Output
_____no_output_____ |
2. Data Modeling/NoSQL Data Models/Lesson 3 Exercise 4 Using the WHERE Clause.ipynb | ###Markdown
Lesson 3 Demo 4: Using the WHERE Clause. In this exercise we are going to walk through the basics of using the WHERE clause in Apache Cassandra. Placeholders denote where the code needs to be completed. We will use a Python wrapper/driver called cassandra to run the Apache Cassandra queries. This library should be preinstalled, but to install it locally you can run this command in a notebook: ! pip install cassandra-driver More documentation can be found here: https://datastax.github.io/python-driver/ Import the Apache Cassandra Python package
###Code
import cassandra
###Output
_____no_output_____
###Markdown
First let's create a connection to the database
###Code
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Let's create a keyspace to do our work in
###Code
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Connect to our Keyspace. Compare this to how we had to create a new session in PostgreSQL.
###Code
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Let's imagine we would like to start creating a new Music Library of albums. We want to ask 4 questions of our data: 1. Give me every album in my music library that was released in 1965 2. Give me the album in my music library that was released in 1965 by "The Beatles" 3. Give me all the albums released in a given year that were made in London 4. Give me the city where the album "Rubber Soul" was recorded Here is our Collection of Data How should we model this data? What should be our Primary Key and Partition Key? Since our data is looking for the YEAR let's start with that. From there we will add clustering columns on Artist Name and Album Name.
###Code
query = "CREATE TABLE IF NOT EXISTS music_library "
query = query + "(year int, artist_name text, album_name text, city text, PRIMARY KEY (year, artist_name, album_name))"
try:
session.execute(query)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Let's insert our data into our table
###Code
query = "INSERT INTO music_library (year, artist_name, album_name, city)"
query = query + " VALUES (%s, %s, %s, %s)"
try:
session.execute(query, (1970, "The Beatles", "Let it Be", "Liverpool"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Beatles", "Rubber Soul", "Oxford"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Who", "My Generation", "London"))
except Exception as e:
print(e)
try:
session.execute(query, (1966, "The Monkees", "The Monkees", "Los Angeles"))
except Exception as e:
print(e)
try:
session.execute(query, (1970, "The Carpenters", "Close To You", "San Diego"))
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Let's Validate our Data Model with our 4 queries. Query 1:
###Code
query = "select * from music_library WHERE YEAR=1970"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
###Output
1970 The Beatles Let it Be Liverpool
1970 The Carpenters Close To You San Diego
###Markdown
Let's try the 2nd query. Query 2:
###Code
query = "select * from music_library WHERE YEAR=1970 AND ARTIST_NAME = 'The Beatles'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
###Output
1970 The Beatles Let it Be Liverpool
###Markdown
Let's try the 3rd query.Query 3:
###Code
query = "select * from music_library WHERE YEAR = 1970 AND LOCATION = 'Liverpool'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
###Output
Error from server: code=2200 [Invalid query] message="Undefined column name location"
###Markdown
Did you get an error? Two things are wrong with Query 3: it refers to LOCATION, but the column is actually named city, which is what produces the 'undefined column' error above. More fundamentally, you cannot filter on a regular column, or skip ahead to a later clustering column, without first restricting the preceding clustering columns. Let's see if we can try it a different way. Try Query 4:
###Code
query = "select city from music_library WHERE YEAR = 1970 AND ARTIST_NAME = 'The Beatles' AND ALBUM_NAME='Let it Be'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.city)
###Output
Liverpool
###Markdown
And Finally close the session and cluster connection
###Code
session.shutdown()
cluster.shutdown()
###Output
_____no_output_____ |
Tutorial/Tutorial.ipynb | ###Markdown
A tutorial on probabilistic models of single-cell gene expression and Bayesian inference within STAN *Nick Phillips, Naef lab, EPFL, 28/06/21* Introduction This is a tutorial to accompany our paper that was recently published in Molecular Systems Biology: "The circadian oscillator analysed at the single‐transcript level" https://doi.org/10.15252/msb.202010135. While our particular model was customised for circadian biology, the approach and tools are general and can be adapted to a wide range of different biological problems. Within this approach, the goal is to define a generative model of single-cell gene expression and infer the parameters of the model within a Bayesian framework, which can be used to e.g. compare two different genes or cell populations.As part of this Bayesian inference strategy we'll be forced to use a probabilistic, generative model of the single-cell data. But how do we define a model or decide which is the best? We'll see that there are multiple models that can be used for single-cell gene expression, and each model has its strengths and weaknesses. There are two main objectives of this tutorial:* to give an overview of several probability distributions of single-cell gene expression based on simple biophysical models* to show how these models can be implemented in STAN for Bayesian inferenceThe most important message is this: probabilistic programming languages such as STAN make Bayesian inference so much more accessible for practitioners, and the straightforward implementation of the analysis pipeline means that users can spend more time thinking about the biological questions and experimenting with different models. However, while the implementation of Bayesian models is now much easier, there are still an enormous number of problems/subtleties that can arise, and we will also see a few basic failure modes that are important to think about.Contents:1. A quick intro to Bayesian data analysis and STAN2. A quick intro to biophysical models of single-cell gene expression3. Model 1: the Poisson distribution4. Model 2: the beta-Poisson distribution5. Model 3: the negative binomial (gamma-Poisson) distribution6. Model 4: the multivariate log-normal Poisson mixture distribution7. Model 5: incorporating extrinsic noise explicitly8. Technical noise9. Model selection and validation10. Strengths and limitations of the approach A quick intro to Bayesian data analysis and STAN If you're a complete beginner to Bayesian analysis then I recommend first starting with the book "Statistical Rethinking" by Richard McElreath [1](https://xcelab.net/rm/statistical-rethinking/), which starts from the basics and is aimed at individuals from diverse (including non-quantitative) backgrounds. And here are a few videos that dive into Bayesian inference and STAN a bit more:* Julia Galef - A visual guide to Bayesian thinking [2](https://youtu.be/BrK7X_XlGB8)* Richard McElreath - Stastistical Rethinking course [3](https://youtu.be/4WVelCswXo4)* Aki Vehtari - Bayesian Data Analysis course [4](https://avehtari.github.io/BDA_course_Aalto/index.html)* Michael Betancourt - on MCMC [5](https://www.youtube.com/watch?v=UzcLe-kpMDQ). Here I'll attempt to summarise some of the core concepts of Bayesian data analysis, but I hope the actual mechanics will become clearer when we look at some practical examples below.Before launching into the practical details of Bayesian inference for single-cell gene expression, it's worth noting some of the advantages and disadvantages of this approach. 
An essential feature of Bayesian analysis is that we're forced to describe a generative model of the data, which isn't necessarily true for all data analysis approaches (e.g. t-SNE). With a model we're able to ask hypothetical "but what if?" type questions that can predict outcomes of interventions to biological systems *before* doing the experiment. We can also use models to explicitly disentangle cell-to-cell variability caused by biological sources vs technical noise, and this recent article is a concrete example of why this can be important [6](https://doi.org/10.1038/s41587-021-00875-x). However, a natural question is: how do we know that we have a good model of our data? And how do we even define a "good" model? We'll revisit this question in section 8. We'll define our models and perform Bayesian inference with STAN [7](https://mc-stan.org), and here are some of the benefits:* In STAN you define the model at a high level and you don't have to write a custom inference algorithm every time you'd like to change the model (it's a big time-saver)* Several textbooks for Bayesian inference (Statistical Rethinking, Bayesian Data Analysis 3rd edition, Regression and Other Stories) show many examples of different models in STAN .* There are loads of interesting online tutorials and resources (see e.g. tutorials from Aki Vehtari [8](https://avehtari.github.io) or Michael Betancourt [9](https://betanalpha.github.io/writing/) )* There's a large user community and the reference manual is comprehensive.* It's available for R, Python and more. The goal of this tutorial is not to cover the best practices/workflow for Bayesian inference (see e.g. this paper [10](https://arxiv.org/pdf/2011.01808.pdf) or this blog [11](https://betanalpha.github.io/assets/case_studies/principled_bayesian_workflow.html) ), as we'll mainly focus on the probabilistic models of single-cell gene expression. We'll skip some (quite important) steps in the Bayesian analysis pipeline (including MCMC diagnostics to check that the inference behaved as expected), so check out these other resources if you'd like further information on these general principles.So what's the big picture? Starting from a simple biophysical model of gene expression, we'll derive several different probability distributions that can be used as part of the STAN inference framework to describe single-cell gene expression datasets. For each gene in our experiment we have a dataset of single-cell mRNA counts (that could come from either smFISH or single-cell RNA-seq) that we'll label $y$, and each model will have some parameters that we label $\theta$ (e.g. transcription rate, mRNA degradation rate etc., see below). We'll use STAN to infer the parameters of our model from the data. Specifically, we'll write our model in STAN, feed it with our data and then draw samples from the "posterior probability distribution" of our model. These samples will (hopefully) reveal the range of plausible values of the model parameters, given the data. We can also use these parameter samples to simulate synthetic datasets (known as the posterior predictive distribution) and check that the model simulations resemble the data (perhaps there's a particular qualitative feature of the dataset that you're interested in?). We can also change a subset of parameters to explore "what if" type questions (e.g. 
what would the mRNA distribution look like if we could double the transcription rate?).For the purposes of this tutorial, the formula we need for Bayesian inference is this:$p(\theta|y) \propto p(y|\theta)p(\theta)$Which has three components:* $p(\theta|y)$ - the posterior parameter distribution. This is a probability distribution that expresses our belief/uncertainty of the parameters $\theta$ given the data $y$. This is the output of STAN, and it's the thing we're interested in.* $p(y|\theta)$ - the likelihood of the data $y$ given the model parameters $\theta$. In order to calculate this, we'll need to input a model that describes the distribution of mRNA counts ($y$) given the model parameters $\theta$. We'll discuss four possible options based on simple biophysical models below.* $p(\theta)$ - the prior distribution. This encodes our state of belief about the parameter values before we see any data. Ideally, here we would insert some expert knowledge on the range of plausible values for each parameter (for example, there are certainly upper limits on possible transcription rates due to the physical constraints of the cell). For illustration, we'll assume that we know very little about the possible values of the parameters and use a "weakly informative prior", but in some cases this is actually not recommended.In order for STAN to sample the posterior parameter distribution, we'll need to supply:1. Data2. A model 3. A prior distribution for each parameterWe'll next go over some simple models that can be used within STAN. A quick intro to biophysical models of single-cell gene expression mRNA production and degradation result from random collisions between finite numbers of interacting molecules within the cell, which creates biological noise known as intrinsic stochasticity. I particularly like this paper from Dattani & Barahona [12](https://doi.org/10.1098/rsif.2016.0833) that describes a conceptual framework for unifying many diverse models of gene expression. We'll use their framework for defining a general, stochastic model of gene expression, where mRNA molecules are produced and degraded according to the following reaction scheme:\begin{align}\emptyset \overset{M_t}\longrightarrow \text{mRNA} \overset{L_t}\longrightarrow \emptyset\end{align}and where the production rate $M_t$ and degradation rate $L_t$ can be stochastic, time-varying functions. The interesting aspect of the approach is that you can break the solution into two simpler problems. First, you solve a differential equation for a new variable $X_t$ that has the same production and degradation rates as the original problem (this will be a random differential equation if either $M_t$ or $L_t$ is stochastic)\begin{align}\frac{dX_t}{dt} = M_t - L_t X_t\end{align}Solving this equation will give the probability density function of the process $X_t$ as $p_{X_t}(x,t)$. To describe the probability distribution of the mRNA molecules, we next take this distribution and plug it into this formula that describes a Poisson mixture distribution (ignoring the ‘burn-in’ transient towards stationarity):\begin{align}p(n,t)=\int \frac{x^n}{n!} e^{-x}p_{X_t}(x,t) dx\end{align}So to reiterate: we first solve a simpler problem to calculate a mixing density $p_{X_t}(x,t)$ and then use it to define the Poisson mixture distribution. In some cases the integral can be perfomed analytically, and in other cases we can introduce "latent variables" as extra parameters within STAN to take care of the integration. 
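As a tiny numerical illustration of this two-step recipe (illustrative only, with an arbitrary choice of mixing density), we can draw the latent rate X from some distribution and then draw Poisson counts conditioned on X:
###Code
# Illustration of the Poisson mixture recipe: any mixing density p(x) for the
# latent rate X induces a distribution over the discrete mRNA counts n
import numpy as np
import matplotlib.pyplot as plt
x_samples = np.random.lognormal(mean=3.0, sigma=0.5, size=100000) # arbitrary mixing density
n_samples = np.random.poisson(x_samples) # Poisson counts given each latent rate
plt.hist(x_samples, bins=50, density=True, alpha=0.5, label='mixing density of X')
plt.hist(n_samples, bins=50, density=True, histtype='step', label='mRNA counts n')
plt.xlabel('value')
plt.ylabel('Density')
plt.legend()
###Output
_____no_output_____
###Markdown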
Hopefully this will become clearer with a few examples. Model 1: the Poisson distribution We'll start with the simplest example: let's imagine that there's a constant rate of production and degradation (transcription rate $M_t = \alpha$ and degradation rate $L_t = \gamma$), which is often referred to a "consitutive gene expression". In this case we simply recover a Poisson distribution, which only has one parameter known as the rate parameter $\lambda$ (equal to $\lambda = \alpha/\gamma$ for this model). Both the mean and variance of the Poisson distribution are given by this rate parameter, and so the larger the ratio of production to degradation, the larger the mean and variance of the mRNA distribution. Interestingly, even for this simple model we can see that there might be problems with recovering the parameters during inference. Below I've simulated this model (externally, with the Gillespie algorithm) using two different combinations of parameters (fast dynamics: $\alpha = 30$, $\gamma = 1$, slow dynamics: $\alpha = 3$, $\gamma = 0.1$).
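For readers who want to reproduce such trajectories themselves, a minimal Gillespie (stochastic simulation algorithm) sketch for this birth-death process might look like the following; this is a hedged illustration, not the exact script used to generate the CSV files loaded below.
###Code
# Minimal Gillespie SSA sketch for the birth-death process:
# production 0 -> mRNA at rate alpha; degradation mRNA -> 0 at rate gamma*n
import numpy as np
def gillespie_birth_death(alpha, gamma, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    times, counts = [t], [n]
    while t < t_end:
        rates = np.array([alpha, gamma * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)  # waiting time to the next reaction
        if rng.random() < rates[0] / total:
            n += 1  # production event
        else:
            n -= 1  # degradation event
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)
t_sim, n_sim = gillespie_birth_death(alpha=30.0, gamma=1.0, t_end=50.0)
###Output
_____no_output_____
###Markdown
The tutorial ships pre-computed trajectories as CSV files, which we now load and plot.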
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import nbinom
t = pd.read_csv('Time.csv',header=None).values
y_fast = pd.read_csv('Model1_fast.csv',header=None).values
y_slow = pd.read_csv('Model1_slow.csv',header=None).values
def plot_time_series(t,y):
plt.plot(t,y,linestyle = 'steps')
plt.ylabel('mRNA')
plt.xlabel('Time (hours)')
def plot_hist(y,bins):
plt.hist(y,bins=bins,density=True)
plt.xlabel('mRNA')
plt.ylabel('Density')
fig = plt.figure(figsize=(8,8))
plt.subplot(2,2,1)
plot_time_series(t,y_fast[:,0])
plt.title(r'Poisson model (fast dynamics). $\alpha = 30$, $\gamma = 1$')
plt.subplot(2,2,2)
plot_hist(y_fast[-1,:],bins=np.linspace(15,50,10))
plt.title(r'mRNA distribution (fast dynamics)')
plt.subplot(2,2,3)
plot_time_series(t,y_slow[:,0])
plt.title(r'Poisson model (slow dynamics). $\alpha = 3$, $\gamma = 0.1$')
plt.subplot(2,2,4)
plot_hist(y_slow[-1,:],bins=np.linspace(15,50,10))
plt.title(r'mRNA distribution (slow dynamics)')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the left column we see the simulated mRNA dynamics in a single example cell, and in the right column we see the mRNA distribution across 1000 simulated cells. Even though the dynamics are different, the resulting mRNA distributions are the same for both fast and slow dynamics! Incidentally, this shows why the time-series generated by live-cell imaging can sometimes carry more parameter information than snapshot measurements.Now let's try and perform inference on our Poisson model. As we'll see, a STAN program is divided into different blocks.
###Code
import pystan
model1 = """
data {
int<lower=1> N;
int y[N];
}
parameters {
real<lower=0> alpha;
real<lower=0> gamma;
}
transformed parameters{
real<lower=0> lambda;
lambda = alpha/gamma;
}
model {
y ~ poisson(lambda);
alpha ~ normal(0,100);
gamma ~ normal(0,100);
}
"""
###Output
_____no_output_____
###Markdown
* Data - here we define the size of our data (the number of cells that we measured, $N$) and then supply the measured values as a variable that we call $y$* Parameters - we have two parameters: the production rate $\alpha$ and the degradation rate $\gamma$. We constrain these rates to be positive with `<lower=0>`.* Transformed parameters - we'll also create a transformed parameter $\lambda = \alpha/\gamma$ that will act as the input rate parameter to the Poisson distribution* Model - here we enter our model of the data $y$ (a Poisson distribution) AND we specify the priors for the parameters $\alpha$ and $\gamma$. We use weakly informative priors $\alpha, \gamma\sim N(0,100)$ i.e. enormously wide normal distributions. And that's it! Let's see what happens! We next perform parameter inference for this model (using MCMC within STAN).
###Code
dat = {
'N' : len(y_fast[-1,:]),
'y' : y_fast[-1,:]
}
sm = pystan.StanModel(model_code=model1)
fit = sm.sampling(data=dat, iter=2000, chains=4)
a = fit.extract(permuted=True)
alpha_inferred = a['alpha']
gamma_inferred = a['gamma']
lambda_inferred = a['lambda']
###Output
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_911d21b04865cbe9659afd57dd6e747f NOW.
###Markdown
After running STAN, we can retrieve the posterior parameter samples and plot them (at this stage you would normally run diagnostic checks).
###Code
def plot_posterior_oned(x,label):
plt.hist(x,bins=15,density=True)
plt.xlabel(label)
plt.ylabel('Density')
fig = plt.figure(figsize=(12,3))
plt.subplot(1,4,1)
plot_posterior_oned(alpha_inferred,r'$\alpha$')
plt.plot([30,30],[0,0.007],'r--')
plt.subplot(1,4,2)
plot_posterior_oned(gamma_inferred,r'$\gamma$')
plt.plot([1,1],[0,0.2],'r--')
plt.subplot(1,4,3)
plt.scatter(alpha_inferred,gamma_inferred,s=2)
plt.xlabel(r'$\alpha$')
plt.ylabel(r'$\gamma$')
plt.subplot(1,4,4)
plot_posterior_oned(lambda_inferred,r'$\lambda$')
plt.plot([30,30],[0,2.5],'r--')
plt.tight_layout()
###Output
_____no_output_____
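###Markdown
As a quick check of the kind mentioned earlier, we can also draw from the posterior predictive distribution: for each posterior sample of lambda we simulate a synthetic dataset and compare it with the observed counts. This is a minimal sketch using the samples already extracted above.
###Code
# Posterior predictive sketch for Model 1: one synthetic dataset per lambda draw
N_cells = len(y_fast[-1, :])
y_rep = np.array([np.random.poisson(lam, size=N_cells) for lam in lambda_inferred[:200]])
plt.hist(y_fast[-1, :], bins=np.linspace(15, 50, 10), density=True, alpha=0.5, label='data')
plt.hist(y_rep.ravel(), bins=np.linspace(15, 50, 10), density=True, histtype='step', label='posterior predictive')
plt.xlabel('mRNA')
plt.ylabel('Density')
plt.legend()
###Output
_____no_output_____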
###Markdown
We can see from the histograms that $\alpha$ and $\gamma$ are all over the place, and plotting both parameters together (the plot of $\alpha$ vs $\gamma$) shows that they slide along a valley together. We know from the simulations that it's really the *ratio* of $\alpha/\gamma$ that's important (and sets the $\lambda$ parameter), so we have a horrible correlation in the parameter estimates between $\alpha$ and $\gamma$. However, the posterior parameter estimates for $\lambda$ includes the true value (30, shown in red). In other terms, we can't disentangle and individually infer the production and degradation rate parameters, but we can reliably infer their ratio. For this particular case, we can say that we're able to infer the transcription rate *scaled* by the mRNA degradation rate. This is a rather trivial example of problems with parameter identifiability, but we'll see another case that's more subtle in Model 2. Model 2 - the beta-Poisson mixture model One of the major problems with the basic Poisson model is that it's too simple: it only has one parameter, and the variance is always equal to the mean. In real single-cell gene expression data this is rarely observed, and the Poisson model is too restrictive.Live-cell imaging has shown that transcription is highly bursty [13](https://science.sciencemag.org/content/332/6028/472.abstract), and a common model known as the "telegraph model" is often used to describe this burstiness. The telegraph model assumes that genes switches discontinuously between an active and inactive state, where mRNA molecules are only produced in the active state. This can be represented with the following set of reactions:\begin{align}g_{\text{off}} &\overset{k_{\text{on}}}\longrightarrow g_{\text{on}} \: & \text{Promoter activation}\\g_{\text{on}} &\overset{k_{\text{off}}}\longrightarrow g_{\text{off}} \: & \text{Promoter deactivation} \\g_{\text{on}} &\overset{\alpha}\longrightarrow g_{\text{on}} + M \: & \text{Transcription} \\M &\overset{\gamma}\longrightarrow \emptyset \: & \text{mRNA degradation} \\\end{align}The differences between the basic Poisson model (equivalent to constitutive expression) and the telegraph model is discussed in [14](https://science.sciencemag.org/content/336/6078/183), which includes some great visualisations.Let's see an example of the telegraph model in action. Below we see an example time series, where $k_{\text{on}}$ and $k_{\text{off}}$ are both equal to 0.5, the transcription rate $\alpha=50$ and the degradation rate $\gamma=1$.
###Code
y_slow_mRNA = pd.read_csv('Model2_slow_mRNA.csv',header=None).values
y_slow_promoter = pd.read_csv('Model2_slow_promoter.csv',header=None).values
fig = plt.figure(figsize=(8,4))
plt.subplot(2,2,1)
plot_time_series(t,y_slow_mRNA[:,0])
plt.title(r'mRNA time series. $k_{on} = 0.5$, $k_{off} = 0.5$')
plt.subplot(2,2,3)
plot_time_series(t,y_slow_promoter[:,0])
plt.ylabel('Promoter state')
plt.title(r'Promoter time series')
plt.subplot(1,2,2)
plot_hist(y_slow_mRNA[-1,:],bins=10)
plt.title(r'mRNA distribution')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In this parameter regime, the switching between the on and off promoter state is relatively slow. We can see that mRNA is increasing while the promoter is in an active state (left column). Averaging across 1000 cells (right column) we see a bimodal mRNA distribution. But what is the exact probability distribution? To solve this model using the formulation from above, the production rate $M_t$ follows a telegraph process that randomly switches between the on and off state. It's then possible to solve the random differential equation for $X_t$, which turns out to be a beta distribution (see [15](https://link.springer.com/article/10.1007/s00285-009-0298-z) or [Dattani](https://doi.org/10.1098/rsif.2016.0833) for details). The full model for the mRNA distribution then becomes a beta-Poisson mixture, and (to my knowledge) the first paper to use this formulation for single-cell RNA-seq data was Kim and Marioni, 2013 [16](https://genomebiology.biomedcentral.com/articles/10.1186/gb-2013-14-1-r7).We can code this model in STAN using latent variables (extra, unseen parameters). Instead of having a constant rate parameter for the Poisson, *each* cell *i* will have have its own parameter $p_i$, where each $p_i$ is drawn from a beta distribution (parameterised by $k_{\text{on}}$ and $k_{\text{off}}$). Mathematically, the model is given by:\begin{align}p_i \sim \text{Beta}(k_{\text{on}},k_{\text{off}}) \\y_i \sim \text{Poisson}(\alpha p_i)\end{align}Note that the degradation rate $\gamma$ is missing from this formulation, which is again impossible to infer using only the mRNA distribution and hence timescales are normalised in terms of the mRNA degradation rate. Now let's code the model in STAN and see if it's possible to reliably infer the parameters!
###Code
model2 = """
data {
int<lower=1> N;
int y[N];
}
parameters {
real<lower=0> k_on;
real<lower=0> k_off;
real<lower=0> alpha;
vector<lower=0, upper=1>[N] p;
}
model {
y ~ poisson(alpha*p);
p ~ beta(k_on,k_off);
k_on ~ normal(0,100);
k_off ~ normal(0,100);
alpha ~ normal(0,100);
}
"""
dat = {
'N' : len(y_slow_mRNA[-1,:]),
'y' : y_slow_mRNA[-1,:]
}
sm = pystan.StanModel(model_code=model2)
fit = sm.sampling(data=dat, iter=2000, chains=4)
a = fit.extract(permuted=True)
k_on_inferred = a['k_on']
k_off_inferred = a['k_off']
alpha_inferred = a['alpha']
p_inferred = a['p']
fig = plt.figure(figsize=(9,3))
plt.subplot(1,3,1)
plot_posterior_oned(k_on_inferred,r'$k_{on}$')
plt.plot([0.5,0.5],[0,14],'r--')
plt.subplot(1,3,2)
plot_posterior_oned(k_off_inferred,r'$k_{off}$')
plt.plot([0.5,0.5],[0,10],'r--')
plt.subplot(1,3,3)
plot_posterior_oned(alpha_inferred,r'$\alpha$')
plt.plot([50,50],[0,0.7],'r--')
plt.tight_layout()
###Output
_____no_output_____
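###Markdown
As before, we can sanity-check the fit by simulating from the beta-Poisson model with the posterior draws; this sketch pushes each posterior sample through the generative model (a beta draw, then a Poisson draw) and compares with the data.
###Code
# Posterior predictive sketch for the beta-Poisson model
N_cells = len(y_slow_mRNA[-1, :])
idx = np.random.choice(len(alpha_inferred), size=200, replace=False)
y_rep = []
for i in idx:
    p_rep = np.random.beta(k_on_inferred[i], k_off_inferred[i], size=N_cells)
    y_rep.append(np.random.poisson(alpha_inferred[i] * p_rep))
y_rep = np.array(y_rep)
plt.hist(y_slow_mRNA[-1, :], bins=10, density=True, alpha=0.5, label='data')
plt.hist(y_rep.ravel(), bins=10, density=True, histtype='step', label='posterior predictive')
plt.xlabel('mRNA')
plt.ylabel('Density')
plt.legend()
###Output
_____no_output_____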
###Markdown
As we can see in the posterior parameter histograms, the true value (shown in red) is contained within most of the posterior parameter distribution.The beta-Poisson model might seem like an attractive model in that it provides more flexibility than the standard Poisson distribution while still being based on a biophysical model of transcription with interpretable parameters. However, we will now see an example that illustrates some potential pathologies of this model. Let's imagine that the switching between on and off state becomes fast relative to the mRNA degradation timescale ($k_{\text{on}}$ and $k_{\text{off}}$ are both equal to 30) and look at another time series:
###Code
y_fast_mRNA = pd.read_csv('Model2_fast_mRNA.csv',header=None).values
y_fast_promoter = pd.read_csv('Model2_fast_promoter.csv',header=None).values
fig = plt.figure(figsize=(8,4))
plt.subplot(2,2,1)
plot_time_series(t,y_fast_mRNA[:,0])
plt.title(r'mRNA time series. $k_{on} = 30$, $k_{off} = 30$')
plt.subplot(2,2,3)
plot_time_series(t,y_fast_promoter[:,0])
plt.ylabel('Promoter state')
plt.title(r'Promoter time series')
plt.subplot(1,2,2)
plot_hist(y_fast_mRNA[-1,:],bins=10)
plt.title(r'mRNA distribution')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now let's use the same STAN model for inference:
###Code
dat = {
'N' : len(y_fast_mRNA[-1,:]),
'y' : y_fast_mRNA[-1,:]
}
fit = sm.sampling(data=dat, iter=2000, chains=4)
a = fit.extract(permuted=True)
k_on_inferred = a['k_on']
k_off_inferred = a['k_off']
alpha_inferred = a['alpha']
p_inferred = a['p']
fig = plt.figure(figsize=(9,3))
plt.subplot(1,3,1)
plot_posterior_oned(k_on_inferred,r'$k_{on}$')
plt.plot([30,30],[0,0.2],'r--')
plt.subplot(1,3,2)
plot_posterior_oned(k_off_inferred,r'$k_{off}$')
plt.plot([30,30],[0,0.1],'r--')
plt.subplot(1,3,3)
plot_posterior_oned(alpha_inferred,r'$\alpha$')
plt.plot([50,50],[0,0.1],'r--')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
We can see STAN is not happy with the inference procedure (we see several warning messages related to STAN diagnostics) and the parameter posterior distributions are a mess. In fact, for this parameter regime there is a problem of parameter identifiability as the switching between on and off states is so fast that the mRNA distribution becomes similar to a standard Poisson. In other words, for some parameter combinations the inference procedure is not robust for Model 2. Model 3: the negative binomial (gamma-Poisson) distribution The negative binomial distribution has become common for modelling mRNA distributions in both smFISH and RNA-seq datasets. But where does this negative binomial distribution come from, and how is it related to transcriptional dynamics? Interestingly, there are (at least) three different ways you can generate a negative binomial distribution. We'll first look at stochastic simulations before describing them below
###Code
y_method1_mRNA = pd.read_csv('Model3_method1_mRNA.csv',header=None).values
y_method1_promoter = pd.read_csv('Model3_method1_promoter.csv',header=None).values
y_method2_mRNA = pd.read_csv('Model3_method2_mRNA.csv',header=None).values
y_method3_mRNA = pd.read_csv('Model3_method3_mRNA.csv',header=None).values
y_method3_alpha = pd.read_csv('Model3_method3_alpha.csv',header=None).values
fig = plt.figure(figsize=(12,12))
plt.subplot(3,3,1)
plot_time_series(t,y_method1_mRNA[:,0])
plot_time_series(t,y_method1_mRNA[:,1])
plt.title(r'Method 1 - approximation of full model')
plt.subplot(3,3,4)
plot_time_series(t,y_method1_promoter[:,0])
plot_time_series(t,y_method1_promoter[:,1])
plt.ylabel('Promoter state')
plt.title(r'Promoter time series')
plt.subplot(3,3,7)
plot_hist(y_method1_mRNA[-1,:],bins=np.linspace(0,100,12))
plt.title(r'mRNA distribution')
b = 50/3
r = 1
x_vec = range(100)
p_tot = np.zeros(len(x_vec))
for i in range(len(x_vec)):
x = x_vec[i]
m = b*r
p = r/(m+r)
y = nbinom.pmf(x, r, p)
p_tot[i] = y
plt.plot(x_vec,p_tot,'r',label='NB approx')
legend = plt.legend(loc='upper right', shadow=True)
plt.subplot(3,3,2)
plot_time_series(t,y_method2_mRNA[:,0])
plot_time_series(t,y_method2_mRNA[:,1])
plt.title(r'Method 2 - random bursts')
plt.subplot(3,3,8)
plot_hist(y_method2_mRNA[-1,:],bins=np.linspace(0,100,12))
plt.title(r'mRNA distribution')
plt.plot(x_vec,p_tot,'r',label='NB exact')
legend = plt.legend(loc='upper right', shadow=True)
plt.subplot(3,3,3)
plot_time_series(t,y_method3_mRNA[:,0])
plot_time_series(t,y_method3_mRNA[:,1])
plot_time_series(t,y_method3_mRNA[:,2])
plt.plot([t[0],t[-1]],[y_method3_alpha[0],y_method3_alpha[0]],'k--')
plt.plot([t[0],t[-1]],[y_method3_alpha[1],y_method3_alpha[1]],'k--')
plt.plot([t[0],t[-1]],[y_method3_alpha[2],y_method3_alpha[2]],'k--')
plt.title(r'Method 3 - gamma distributed $\alpha$')
plt.subplot(3,3,9)
plot_hist(y_method3_mRNA[-1,:],bins=np.linspace(0,100,12))
plt.title(r'mRNA distribution')
plt.plot(x_vec,p_tot,'r',label='NB exact')
legend = plt.legend(loc='upper right', shadow=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
*Method 1: as an approximation of the full telegraph model when bursts are short*. In the regime where bursts are short (i.e. a large ratio $k_{\text{off}}/\gamma$), the full model can be approximated by the negative binomial distribution (see Raj et al. 2006 Supp Material [17](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0040309)). In contrast to the full telegraph model (which has 3 parameters, excluding the degradation rate that is impossible to infer), the negative binomial distribution has two parameters: a burst size and burst frequency. A negative binomial distribution is less flexible than the beta-Poisson distribution and can never give rise to a bimodal distribution. However, the short burst size is probably a reasonable assumption for many mammalian genes [13](https://science.sciencemag.org/content/332/6028/472.abstract) and parameter inference is generally more robust. *Method 2: instantaneous bursts of mRNA production*. In contrast to the approximation made in Method 1, it's possible to use a slightly different model of transcription that will give rise to a negative binomial as the exact distribution. In Friedman et al. 2006 [18](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.97.168302), the authors consider a model of protein dynamics where protein production occurs in random bursts with an exponentially distributed number of molecules, and they show that this model leads to a gamma distribution in steady state. In fact, the model they used was equivalent to solving the random differential equation for $X_t$ where the production term $M_t$ is described by instantaneous, exponentially distributed injections. In the time series for Method 2 you can see that mRNA increases in discontinuous bursts. When the mixture distribution of the Poisson distribution is a gamma distribution, the integral can be performed analytically and leads to a negative binomial distribution (see the wiki page for details [19](https://en.wikipedia.org/wiki/Negative_binomial_distribution)). *Method 3: a simple model of extrinsic noise*. A negative binomial distribution can also be created with an entirely different generative model. It is unrealistic to assume that all cells have exactly the same kinetic parameters, and parameter variability could arise from many different mechanisms (e.g. differences in concentration of transcriptional enzymes, cell volume etc.). Let's imagine that the transcription rate varies according to a gamma distribution. In the time series for Method 3 we see that cells fluctuate around their own mean, which is dictated by their individual transcription parameter. In each cell there is a Poisson distribution with a rate parameter equal to the hidden (latent) transcription rate. Given that the transcription rate is gamma distributed, we once again recover a negative binomial distribution across the whole population. Methods 2 and 3 once again illustrate a very important principle: different dynamical models can produce the same stationary distributions. We can code any of the models in STAN using a negative binomial distribution. Option 1: directly use a negative binomial distribution.
###Code
model3 = """
data {
int<lower=1> N;
int y[N];
}
parameters {
real<lower=0> b;
real<lower=0> f;
}
model {
y ~ neg_binomial_2( b*f , f);
b ~ normal(0,100);
f ~ normal(0,100);
}
"""
###Output
_____no_output_____
###Markdown
For Method 3, we could alternatively give each cell its own (latent) transcription parameter that is drawn from a gamma distribution. This method is less efficient because each cell has an associated latent variable. Option 2: use a latent variable representation with a gamma mixture of Poisson distributions.
###Code
model3 = """
data {
int<lower=1> N;
int y[N];
}
parameters {
real<lower=0> b;
real<lower=0> f;
real<lower=0> alpha[N];
}
model {
y ~ poisson(alpha);
alpha ~ gamma(f, 1.0 / b); // gamma(shape, rate) in Stan; mean f*b matches neg_binomial_2(b*f, f)
b ~ normal(0,100);
f ~ normal(0,100);
}
"""
###Output
_____no_output_____
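###Markdown
Before moving on, it's easy to verify Method 3 numerically: drawing a gamma-distributed rate per cell and then a Poisson count reproduces the negative binomial exactly. A small numpy sketch, using the same burst size b and burst frequency of 1 as in the plots above:
###Code
# Numerical check: a gamma-mixed Poisson equals a negative binomial
from scipy.stats import nbinom
b, f = 50/3, 1.0  # burst size and burst frequency (same values as the plots above)
alpha_cells = np.random.gamma(shape=f, scale=b, size=100000)  # per-cell rates, mean b*f
counts = np.random.poisson(alpha_cells)
x = np.arange(100)
pmf = nbinom.pmf(x, f, f/(b*f + f))  # negative binomial with mean b*f and dispersion f
plt.hist(counts, bins=np.linspace(0, 100, 25), density=True, alpha=0.5, label='gamma-Poisson samples')
plt.plot(x, pmf, 'r', label='negative binomial pmf')
plt.xlabel('mRNA')
plt.ylabel('Density')
plt.legend()
###Output
_____no_output_____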
###Markdown
Model 4: the multivariate log-normal Poisson mixture distribution Instead of using a gamma distribution, we could be more open-minded about what type of distribution we use for the mixing distribution. We could, for example, use a log-normal distribution instead of a gamma distribution (similarly to as performed here: [6](https://doi.org/10.1038/s41587-021-00875-x)). There are many different ways that one could generate a log-normal distribution. Mechanistically, we could assume that the transcription process $X_t$ follows a modified Ornstein-Uhlenbeck process in log space. Alternatively, the central limit theorem in the log domain tells us that we expect a log-normal distribution when we multiply many independent variables. We saw that the negative binomial distribution could be generated by either purely intrinsic (transcriptional bursting) or extrinsic (differences in transcription rate) mechanisms. We could also decide to be agnostic and say: we'll use the log-normal to model everything (both intrinsic and extrinsic) that we can't measure and that will create super-poissonian noise.With a multivariate log-normal distribution it's also possible to introduce correlation between genes and hence encode networks of statistical relationships. This is also an opportunity to show another nice feature of STAN. A correlation matrix has a certain structure (symmetric and positive semi-definite), so we could have problems if we encode each entry in the correlation matrix as a free parameter. Fortunately, in STAN there is a special data type for correlation matrices that will ensure that this doesn't cause a problem. Here is an example of how you could code this in STAN for a matrix of gene expression counts $Y$ with $N$ cells and $K$ number of genes. Note that we use a "non-centred" parameterisation for the log-normal latent variables (see the STAN reference manual for discussion).
###Code
model4 = """
data {
int<lower=1> N;
int<lower=1> K;
int<lower=0> Y[N,K];
}
parameters {
row_vector[K] mu_vec;
matrix[N, K] eta;
cholesky_factor_corr[K] L;
vector[K] sigma_vec;
}
transformed parameters{
matrix[N, K] Z;
matrix[K, K] C;
C = L*L'; // reconstruct the full correlation matrix from its Cholesky factor
for ( n in 1:N ) {
Z[n,:] = exp(mu_vec + (diag_pre_multiply(sigma_vec,L)*(eta[n,:])')');
}
}
model {
to_vector(eta) ~ normal(0, 1);
for ( k in 1:K ) {
Y[:,k] ~ poisson(Z[:,k]);
}
mu_vec ~ normal(0,100);
sigma_vec ~ normal(0,100);
L ~ lkj_corr_cholesky(2.0);
}
"""
###Output
_____no_output_____
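###Markdown
To build intuition for this model, here is a small forward simulation sketch: correlated log-normal rates for two genes, then Poisson counts conditioned on those rates. The parameter values are arbitrary and purely illustrative.
###Code
# Forward simulation of the multivariate log-normal Poisson model (2 genes)
N_cells, K = 1000, 2
mu_vec = np.log([20.0, 40.0])  # log-scale means (illustrative values)
sigma_vec = np.array([0.5, 0.5])  # log-scale standard deviations
rho = 0.7  # gene-gene correlation of the latent rates
cov = np.array([[sigma_vec[0]**2, rho*sigma_vec[0]*sigma_vec[1]],
                [rho*sigma_vec[0]*sigma_vec[1], sigma_vec[1]**2]])
log_rates = np.random.multivariate_normal(mu_vec, cov, size=N_cells)
Y_sim = np.random.poisson(np.exp(log_rates))  # counts given the correlated latent rates
plt.scatter(Y_sim[:, 0], Y_sim[:, 1], s=2)
plt.xlabel('gene 1 counts')
plt.ylabel('gene 2 counts')
###Output
_____no_output_____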
###Markdown
Model 5: incorporating extrinsic noise explicitly There is somewhat of a debate within the single-cell gene expression community about how much single-cell noise is due to intrinsic vs extrinsic factors. We saw a very interesting example for the negative binomial distribution, where intrinsic noise (caused by transcriptional bursting) and extrinsic noise (caused by cell-to-cell parameter differences) generated exactly the same mRNA distribution. So how will we ever know the real contribution of extrinsic noise? One approach is to measure as many cellular variables as possible and see whether knowledge of these variables helps predict gene expression in a given cell. In Battich et al. 2015 [20](https://www.cell.com/fulltext/S0092-8674(15)01498-1), for example, they measured 183 features including cell area, shape, cell crowding, neighbourhood activity etc. We can incorporate any additional cellular measurements into our STAN model. The measured cellular features can be supplied as an additional data input in the STAN data block. You could either use the original measured cellular features or perhaps a lower-dimensional representation using either a linear or nonlinear dimensionality reduction technique. You could even try and do this in STAN if you were feeling adventurous! We still need to define the distribution of mRNA counts *given* all of the external cellular variables (i.e. we still need to define a model of the remaining intrinsic noise once we have the cellular information). You can experiment with any of the Models 1-4. In our recent paper we used a negative binomial distribution, but if you decide to do this then you need to define whether the additional measurements affect either burst size or burst frequency. There is previous work ([21](https://www.cell.com/molecular-cell/pdfExtended/S1097-2765(15)00170-7)) showing that burst size is a function of cell volume, so let's code this as a STAN model as a simple example where the burst size $b$ is now rescaled by the volume $v$ to become $bv$, and where we enter the volume $v$ as an extra data input.
###Code
model5 = """
data {
int<lower=1> N;
int y[N];
vector<lower=0>[N] v; // measured volume for each cell
}
parameters {
real<lower=0> b;
real<lower=0> f;
}
model {
y ~ neg_binomial_2( b*v*f , f);
b ~ normal(0,100);
f ~ normal(0,100);
}
"""
###Output
_____no_output_____
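###Markdown
As a usage sketch (assuming per-cell volume measurements are available alongside the counts), the extra covariate is simply passed through the data dictionary; the variable v here is hypothetical:
###Code
# Hypothetical usage: v holds a volume measurement for each cell
# dat = {'N': len(y), 'y': y, 'v': v}
# sm5 = pystan.StanModel(model_code=model5)
# fit5 = sm5.sampling(data=dat, iter=2000, chains=4)
###Output
_____no_output_____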
###Markdown
Exercise![img](imori.jpg "imori")Swap the R and B color channels in the upper-left quarter of the image.
###Code
# my answer
import cv2
import matplotlib.pyplot as plt
img = cv2.imread("imori.jpg")  # OpenCV loads images with BGR channel order
shape = img.shape
print(shape[0]//2, shape[0]/2)
# swap channel 0 (B) and channel 2 (R) in the upper-left quarter
B = img[:shape[0]//2, :shape[1]//2, 0].copy()
img[:shape[0]//2, :shape[1]//2, 0] = img[:shape[0]//2, :shape[1]//2, 2]
img[:shape[0]//2, :shape[1]//2, 2] = B
plt.imshow(img)
# Model answer
img = cv2.imread("imori.jpg")
img3 = img.copy()
H, W, C = img3.shape
# indexing the channel axis with (2, 1, 0) reverses the channel order in one step
img3[:H//2, :W//2] = img3[:H//2, :W//2, (2 ,1 ,0)]
plt.imshow(img3)
###Output
_____no_output_____
###Markdown
Tutorial - Voltammetry analysis step-by-step. This notebook walks you through how to analyze voltammetry data that has been previously acquired and stored in "axon binary file" (abf) format. Preprocessing: the "voltammetry" module included in this repository has several tools to load and view data. First, let's load the relevant packages and preprocess the data.
###Code
# load packages
import os
import h5py
import numpy as np
import matplotlib.pyplot as plt
import voltammetry
abfpath = 'Tutorial_LABS_data'
sorted(os.listdir(abfpath))
###Output
_____no_output_____
###Markdown
This data set contains 18 experiments with the extension ".h5" and a file with labels corresponding to each experiment. Each experimental data file has two matrices: 'CMD' = the forcing function used for each frame [frame point count X frames]; 'Voltammogram' = the current trace for each frame [frame point count X frames].
###Code
# load single data file
d0 = h5py.File('Tutorial_LABS_data/Tutorial_LABS_data_0000.h5','r')
print(list(d0.keys()))
print(d0['CMD'])
print(d0['Voltammogram'])
plt.plot(d0['CMD'][:,0])
plt.legend({'LABS'})
plt.xlabel('frame index')
plt.ylabel('voltage (mV)')
###Output
['CMD', 'Voltammogram']
<HDF5 dataset "CMD": shape (1032, 10000), type "<f8">
<HDF5 dataset "Voltammogram": shape (1032, 10000), type "<f8">
###Markdown
The votammogram data and labels can be loaded and converted to the proper format
###Code
vg = voltammetry.Data(abfpath)
labels = voltammetry.Mulabels(abfpath, 'run.csv')
# Plot single voltammogram for each experiment. Note values above 2000 nA are cut off and drop down
# to minimum of -2000 nA.
vgram_fig = vg._plotVoltammograms()
###Output
_____no_output_____
###Markdown
Select the data to be analyzed and preprocess it. Each 1032-point frame contains 16 leading and 16 trailing data points that must be removed. A 1500-frame window will then be selected from the full 10000 frames of each experiment, based on the most stable median value. Points that overflowed down to the minimum are flipped back to the maximum to reduce overflow error. **This can be time consuming.**
###Code
data = voltammetry.PreprocessedData(vg.Voltammogram[16:1016], labels,window_size=1500,trainingSampleSize=125,corr_over=True)
###Output
Correcting overflow negative values
Start partition
Finding stable section with window size 1500
Partitioning data with training sample size 125
Flattening Data
PRE-PROCESSING COMPLETE!!!!
###Markdown
Observe the input shape of the corrected voltammogram.
###Code
y = data.training.vgrams[0,:]
x = np.arange(len(y))/100 # scale x based on 100 kHz sampling rate
plt.plot(x,y)
plt.title('corrected voltammogram')
plt.xlabel('Time (ms)')
plt.ylabel('current (nA)');
###Output
_____no_output_____
###Markdown
As an alternative to background subtraction, the voltammograms are differentiated with respect to index.
###Code
y = np.diff(data.training.vgrams[0,:])
x = np.arange(len(y))/100 # scale x based on 100 kHz sampling rate
plt.plot(x,y)
plt.title('differentiated voltammogram')
plt.xlabel('Time (ms)')
plt.ylabel(r'$\Delta$current (nA/index)');
###Output
_____no_output_____
###Markdown
Training Elastic-Net Penalized Regression. The software package GLMnet computes the $\lambda$ which minimizes the 10-fold cross-validation error. A function was written to iterate over values of $\alpha$ from 0 to 1 in steps of 0.1. The partitioned training data will be used as input and the model is then validated on the testing data. More detailed information on GLMnet can be found at https://glmnet-python.readthedocs.io/en/latest/glmnet_vignette.html.
###Code
# train cvglmnet model
max_core = 8 # maximum number of cores the machine can use in parallel for training
# Search for the best alpha from 0 to 1 in steps of 0.1
bestAlpha = voltammetry.best_alpha(data.training)
cvFit = voltammetry.train_analyte(data.training, alpha=bestAlpha,parallel=max_core)
# generate predictions
predictions = voltammetry.test_analyte(data.testing, cvFit)
###Output
_____no_output_____
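###Markdown
For readers without the voltammetry package, the same alpha-grid idea can be sketched with scikit-learn's ElasticNetCV (where GLMnet's alpha is called l1_ratio, and GLMnet's lambda is called alpha). This is an illustrative stand-in under assumed data shapes, not the package's actual implementation.
###Code
# Hedged sketch of the alpha grid search using scikit-learn instead of GLMnet
from sklearn.linear_model import ElasticNetCV
X_train = np.diff(data.training.vgrams, axis=1)  # differentiated voltammograms as features
y_train = np.asarray(data.training.labels).ravel()  # assumes one concentration label per sample
enet = ElasticNetCV(l1_ratio=np.linspace(0.1, 1.0, 10), cv=10, n_jobs=-1)
enet.fit(X_train, y_train)
print('best l1_ratio (alpha):', enet.l1_ratio_, 'best penalty (lambda):', enet.alpha_)
###Output
_____no_output_____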
###Markdown
Results. The predictions of the elastic-net penalized linear regression model can be evaluated by comparing true values to predicted values. Below, the calibration of the model is visualized as the individual predictions in a time series, along with the root-mean-squared error and signal-to-noise ratio at each concentration.
###Code
# Generate predictions, plot results
chemIx = 0 # only one analyte
t = np.arange(len(data.testing.labels))
stats = voltammetry.calcStepStats(chemIx, predictions, data.testing.labels)
[calFig,ax1,ax2,ax3] = voltammetry.plot_Calibration(t, predictions, data.testing.labels, labels.targetAnalyte, chemIx, stats)
###Output
_____no_output_____
###Markdown
We can also visualize some aspects of the model, including the linear regression coefficients generated for the model.
###Code
# plot betas
from glmnet_python import cvglmnetCoef
betas = cvglmnetCoef(cvFit, 'lambda_min')
#plt.figure(dpi=200)
plt.plot(np.squeeze(betas[0])[1::]) # the first element is the offset, omit when plotting
plt.title('weight along each frame point')
plt.ylabel('coefficient')
plt.xlabel('frame position')
###Output
_____no_output_____
###Markdown
The above tutorial should provide a guide for using these models and interpreting voltammetry data. The same basic steps are used in the "Dopamine_Norepinephrine.ipynb", "Comparing_linear_sweep_random_burst.ipynb", and "Ensemble_model_example.ipynb" demos. Thank you.
###Code
## Credits
import sys, platform, time
print('This tutorial was created using:')
print('Python Version:',sys.version)
print('Operating System:',platform.system(),'Version',platform.release())
print('GLMnet for python: https://web.stanford.edu/~hastie/glmnet_python/')
print('Numpy: https://numpy.org/')
print('h5py: http://www.h5py.org/')
print('pyplot: https://matplotlib.org/index.html')
print('Last updated:',time.strftime('%d-%b-%Y %H:%M:%S',time.localtime()))
###Output
This tutorial was created using:
Python Version: 3.7.3 (default, Mar 27 2019, 16:54:48)
[Clang 4.0.1 (tags/RELEASE_401/final)]
Operating System: Darwin Version 18.7.0
GLMnet for python: https://web.stanford.edu/~hastie/glmnet_python/
Numpy: https://numpy.org/
h5py: http://www.h5py.org/
pyplot: https://matplotlib.org/index.html
Last updated: 20-Dec-2020 21:21:55
|
14-Advanced Python Objects and Data Structures/03-Advanced Sets.ipynb | ###Markdown
Advanced Sets. In this lecture we will learn about the various methods for sets that you may not have seen yet. We'll go over the basic ones you already know and then dive a little deeper.
###Code
s = set()
###Output
_____no_output_____
###Markdown
add: adds elements to a set. Remember, a set won't duplicate elements; it will only present them once (that's why it's called a set!)
###Code
s.add(1)
s.add(2)
s
###Output
_____no_output_____
###Markdown
clear: removes all elements from the set
###Code
s.clear()
s
###Output
_____no_output_____
###Markdown
copy: returns a copy of the set. Note it is a copy, so changes to the original don't affect the copy.
###Code
s = {1,2,3}
sc = s.copy()
sc
s
s.add(4)
s
sc
###Output
_____no_output_____
###Markdown
difference: returns the difference of two or more sets. The syntax is set1.difference(set2). For example:
###Code
s.difference(sc)
###Output
_____no_output_____
###Markdown
difference_update: the syntax is set1.difference_update(set2); the method returns set1 after removing elements found in set2
###Code
s1 = {1,2,3}
s2 = {1,4,5}
s1.difference_update(s2)
s1
###Output
_____no_output_____
###Markdown
discardRemoves an element from a set if it is a member. If the element is not a member, do nothing.
###Code
s
s.discard(2)
s
###Output
_____no_output_____
###Markdown
intersection and intersection_updateReturns the intersection of two or more sets as a new set.(i.e. elements that are common to all of the sets.)
###Code
s1 = {1,2,3}
s2 = {1,2,4}
s1.intersection(s2)
s1
###Output
_____no_output_____
###Markdown
intersection_update will update a set with the intersection of itself and another.
###Code
s1.intersection_update(s2)
s1
###Output
_____no_output_____
###Markdown
isdisjointThis method will return True if two sets have a null intersection.
###Code
s1 = {1,2}
s2 = {1,2,4}
s3 = {5}
s1.isdisjoint(s2)
s1.isdisjoint(s3)
###Output
_____no_output_____
###Markdown
issubsetThis method reports whether another set contains this set.
###Code
s1
s2
s1.issubset(s2)
###Output
_____no_output_____
###Markdown
issupersetThis method will report whether this set contains another set.
###Code
s2.issuperset(s1)
s1.issuperset(s2)
###Output
_____no_output_____
###Markdown
symmetric_difference and symmetric_difference_updateReturn the symmetric difference of two sets as a new set.(i.e. all elements that are in exactly one of the sets.)
###Code
s1
s2
s1.symmetric_difference(s2)
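# symmetric_difference_update performs the same operation in place:
s1.symmetric_difference_update(s2)
s1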
###Output
_____no_output_____
###Markdown
unionReturns the union of two sets (i.e. all elements that are in either set.)
###Code
s1.union(s2)
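# the | operator is equivalent:
s1 | s2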
###Output
_____no_output_____
###Markdown
updateUpdate a set with the union of itself and others.
###Code
s1.update(s2)
s1
###Output
_____no_output_____ |
misc/ireland_rip_deaths2.ipynb | ###Markdown
RIP.ie daily death data (v2)* rip.ie* http://dmnfarrell.github.io/* https://data.gov.ie/dataset/list-of-nursing-homes-in-ireland/resource/489aad00-cad1-41d7-92bf-8b5cdd9d61ea* https://data.gov.ie/dataset/62954fa3-1492-48af-93d1-5c9bf6a14d1e/resource/142d3b64-8f02-4ed7-bfbd-dc3e20420f3f&r=C01885V02316&c=STATISTIC
###Code
import pandas as pd
import pylab as plt
import numpy as np
import seaborn as sns
import matplotlib as mpl
import matplotlib.dates as mdates
import difflib, re
pd.set_option('display.width', 150)
locator = mdates.AutoDateLocator(minticks=4, maxticks=10)
formatter = mdates.ConciseDateFormatter(locator)
sns.set_style("white")
sns.set_context('talk')
pd.set_option('display.max_colwidth', 500)
def get_data(dups=False):
df = pd.read_pickle('rip_dn_scrape_processed.pkl')
df=df.dropna(subset=['date'])
if dups==False:
df=df.drop_duplicates(['name','date','county'])
print (len(df))
df['date'] = pd.to_datetime(df.date,format='%d/%m/%Y',errors='coerce')
    df['name'] = df.name.str.replace('\xa0', ' ') # normalise non-breaking spaces (assumption: the original literal was mangled in export)
df.index=df.index.astype('int')
df.sort_index()
df['year'] = df.date.dt.year.astype(int)
df['month'] = df.date.dt.month
df['day'] = df.date.dt.dayofyear
df['week'] = df.date.dt.isocalendar().week
#df['week'] = df.date.dt.strftime('%W').astype('int')
df['year-week'] = df.date.dt.strftime('%Y-W%U')
return df
df = get_data()
df = df[df.year>=2008]
df[(df.week==44) & (df.year==2021)]
g=df.groupby(['year','week']).agg({'name':np.size}).reset_index()
g=g[(g.week>23) & (g.week<45)]
g = g[g.year>2017]
#print (g)
sns.catplot(data=g,x='week',y='name',hue='year',kind='bar',aspect=3.0)
plt.ylabel('total deaths')
plt.title('Ireland total deaths per week calculated from RIP.ie, weeks 24-44')
plt.savefig('ireland_deaths_ripie_byweek.png',dpi=150)
g=df.groupby(['year','month']).agg({'name':np.size}).reset_index()
g=g[g.year>2017]
sns.catplot(data=g,x='month',y='name',hue='year',kind='bar',aspect=3.0)
nhomes = pd.read_csv('nursing_homes.csv')
#print (nhomes[:10])
def find_nhome(x):
for i,r in nhomes.iterrows():
if r.shortname in x.notice and x.county == r.county:
return r['name']+','+r.county
x=df[:160]
#x['home'] = x.apply(lambda x: find_nhome(x),1)
#print (x)
pop = pd.read_csv('ireland_population.csv')
wbcdrt = pd.read_csv('ireland_cdrt.csv')
x=df[(df.month<11) & (df.month>5)]
totals = x.groupby('year').agg('size')
print (totals)
ax=totals.plot(kind='bar',grid=True,figsize=(10,5))
plt.title('RIP.ie estimate, total deaths Jun-Oct (2008-2020)')
sns.despine()
plt.tight_layout()
plt.savefig('ireland_deaths_ripie_summary_v3.png',dpi=120)
d=pd.DataFrame(totals,columns=['deaths']).reset_index()
d=pop.merge(d,on='year')
d=d.sort_values('year')
d['deathsper1000'] = d.deaths/d['pop']*1e3
d=wbcdrt.merge(d,on='year')
d.plot(x='year',y=['deaths','deathsper1000','cdrt'],kind='bar',subplots=True,grid=True,legend=False,figsize=(10,8))
plt.suptitle('RIP.ie deaths 2008-2020')
sns.despine()
plt.tight_layout()
byw = pd.pivot_table(df, index='week',columns='year',values='name',aggfunc='size')
byw['5 yr average'] = byw.iloc[:,8:-1].mean(1)
x=byw.iloc[25:43,11:]
#print (x)
x.plot(kind='bar',width=.8,figsize=(20,6))
#x.T.boxplot(figsize=(18,6))
plt.legend(loc=4,ncol=5,framealpha=0.9,fontsize=18)
plt.suptitle('RIP.ie deaths per week 2019-2021')
sns.despine()
plt.savefig('ireland_deaths_ripie_byweek.png',dpi=150)
x = df.groupby('date').size()
ax=x.rolling(14,win_type='hamming').mean().plot(lw=2,figsize=(15,6),ylim=(50,160))
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
for y in range(2008,2021):
ax.vlines(pd.to_datetime('%s-12-31' %y),0,160,color='r',ls=':')
plt.suptitle('RIP.ie deaths 2008-2021, 14 day average')
sns.despine()
plt.tight_layout()
plt.savefig('ireland_deaths_ripie_trend_v2.png',dpi=150)
byday = pd.pivot_table(df, index='day',columns='year',values='name',aggfunc='size')
byday['average'] = byday.iloc[:,8:-1].mean(1)
#x = byday.iloc[:,10:]
x = byday[[2021,'average']]
#x = x.loc[50:]
meanday = x.rolling(window=7,win_type='hamming').mean()
ax=meanday.plot(figsize=(15,6),ylim=(50,160),lw=3,alpha=0.7)
ax.grid(linestyle='--',linewidth=.5)
plt.legend(loc=9,ncol=3,fontsize=16)
sns.despine()
plt.suptitle('RIP.ie daily deaths, 2021 vs 5-year average, 7 day trailing average')
plt.tight_layout()
plt.savefig('ireland_deaths_ripie_compared_mean_v2.png',dpi=150)
###Output
_____no_output_____
###Markdown
GRO compare
###Code
gro = pd.read_csv('gro_deaths.csv')
bymonth = pd.pivot_table(df, index='month',columns='year',values='name',aggfunc='size')
a=bymonth.reset_index()
a.columns=[str(i) for i in a.columns]
b=gro.merge(a,on='month',suffixes=['_GRO','_RIP'])
#print (b)
f,axs=plt.subplots(2,4,figsize=(18,9))
axs=axs.flat
i=0
for y in range(2013,2021):
ax=axs[i]
b.plot(x='%s_GRO' %y,y='%s_RIP' %y,c='0.1',s=100,kind='scatter',grid=True,ax=ax)
ax.plot([2000, 3300], [2000, 3300], ls='--')
ax.set_xlim(2000,3300)
ax.set_ylim(2000,3300)
ax.set_title('GRO vs RIP.ie, %s' %y)
i+=1
sns.despine()
plt.tight_layout()
plt.savefig('ireland_deaths_gro_vs_ripie.png',dpi=150)
###Output
_____no_output_____
###Markdown
compare eurostat data
###Code
eu=pd.read_csv('estat_demo_r_mwk_ts_filtered.tsv',sep='\t').T
eu.columns=['Eurostat']
eu[-4:]
x = df.groupby(['year','week']).size().reset_index()
x['time'] = x.apply(lambda x: '%s-W%02d' %(x.year,x.week),1)
x=x.set_index('time').iloc[:,2:]
x.columns=['RIP.ie']
x=x.merge(eu,left_index=True, right_index=True,how='right')
#x['diff'] = x.RIP-x.Eurostat
print (x[50:70])
fig,ax=plt.subplots(1,1,figsize=(17,7))
ax=x.plot(lw=3,ax=ax,grid=True)
sns.despine()
plt.tight_layout()
plt.title('Weekly mortality RIP.ie estimate vs Eurostat',fontsize=25)
#ax2 = fig.add_axes( [0.4, 0.6, 0.15, 0.3])
#x.plot(x='Eurostat',y='RIP.ie',kind='scatter',c='black',ax=ax2)
plt.tight_layout()
fig.savefig('eurostat_ireland_deaths_compared.png')
###Output
RIP.ie Eurostat
2020-W51 658 614
2020-W52 700 628
2020-W53 429 672
2021-W01 859 789
2021-W02 988 906
2021-W03 1061 975
2021-W04 1034 971
2021-W05 961 890
2021-W06 902 834
2021-W07 813 752
2021-W08 707 671
2021-W09 679 625
2021-W10 644 582
2021-W11 620 571
2021-W12 639 586
2021-W13 670 622
2021-W14 617 567
2021-W15 628 591
2021-W16 630 589
2021-W17 648 591
###Markdown
person matching
###Code
df = get_data(dups=True)
sub = df[(df.address.str.contains('Dublin')) | (df.county=='Dublin')]
print (len(sub))
sub.to_pickle('rip_dublin.pkl')
test = sub[sub.address.str.contains('Dublin')][['name','address','date']].sample(20)
#test['address'] = test.address.apply(lambda x: '+'.join(x.strip().split(',')))
test
def clean_name(txt):
    # strip punctuation runs (including apostrophes) so names compare cleanly
    clean = re.sub(r"""[,.;@#?!&$']+ \ * """, " ",
                   txt, flags=re.VERBOSE)
    clean = clean.strip()
    return clean

x=df[df.name.str.contains("'")]
x[:10].name.apply(clean_name)
def find_word(w):
return re.compile(r'\b({0})\b'.format(w), flags=re.IGNORECASE).search
def find_match(x,r):
name = clean_name(x['name'])
name = name.lower()
if r.surname.lower() in name and find_word(r.firstname)(name): #r.firstname.lower() in name:
return True
return False
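# run_search strategy: exact first-name/surname match first; if several notices
# match, keep the one whose address is most similar (difflib ratio) to the target.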
def run_search(db, targets, keep=1):
"""Search rip.ie table for known persons"""
print ('searching %s rows' %len(db))
results = []
for i,r in targets.iterrows():
print (r.case_id,r.surname,r.firstname)
A = r.address
f = db.apply(lambda x: find_match(x,r),1)
res = db[f].copy()
if len(res) == 0:
print ('no names match')
res=pd.DataFrame([(r.case_id,0,'NA')],columns=['case_id','year','id'])
print (res)
elif len(res) == 1:
print ('one unique hit')
#res = res.iloc[0]
else:
#get best match address
print ('found %s hits' %len(res))
addresses=list(res.address)
#res['score']=res.apply(lambda x: fuzz.ratio(A, x.address),1)
res['score']=res.apply(lambda x: difflib.SequenceMatcher(None, A, x.address).ratio(),1)
res = res.sort_values('score',ascending=False)
res = res[:keep]
#res = res.iloc[0]
res['case_id'] = r.case_id
results.append(res)
results = pd.concat(results).reset_index(drop=True)
return results
targets = pd.read_csv('test_targets.csv')
results = run_search(sub, targets, keep=1)
results = targets.merge(results,on='case_id')
results
results.to_csv('search_results.csv',index=False)
###Output
_____no_output_____ |
offaxis/off_axis_loop.ipynb | ###Markdown
Off-Axis Field of a Current Loop *This simple formula can be obtained using the [Law of Biot Savart](../basics/biotsavart.html), integrated over a circular current loop to obtain the magnetic field at any point in space. Compare this to the much simpler formula for calculating the [on-axis magnetic field due to a current loop](../solenoids/current_loop.html).* ![Magnetic field in vicinity of a current loop. Point is located at axial distance, x, and radius, r.](./offaxisloop.png "Field due to a current loop") Axial, Radial Components$B_x = B_0 \frac 1 {\pi \sqrt Q} \left[E(k) \frac {1-\alpha^2-\beta^2}{Q-4\alpha} + K(k) \right]$$B_r = B_0 \frac {\gamma} {\pi \sqrt Q} \left[E(k) \frac {1+\alpha^2+\beta^2}{Q-4\alpha} - K(k) \right]$**$B$** is the magnetic field (Tesla) at any point in space that isn't on the current loop. It is equal to the sum of two field components: **$B_x$** the field component that is aligned with the axis and **$B_r$**, the field component that is in a radial direction. Symbols are defined:$\alpha = \frac r a $ and $\beta = \frac x a $ and $\gamma = \frac x r$$Q = \left[\left(1 + \alpha\right)^2 + \beta^2 \right]$$k = \sqrt {\frac {4 \alpha} Q}$**$B_0$** is the magnetic field at the center of the coil: $B_0 = \frac {i \mu_0} {2a}$ **$i$** is the current in the loop wire (Amperes)**$a$** is the loop radius (meters)**$\mu_0$** is the permeability constant (approx. 1.26 x 10-6 or *exactly* 4π x 10-7)**$x$** is the distance in the axial direction from the center of the current loop to the field measurement point.**$r$** is the distance in the radial direction from the axis of the current loop to the field measurement point.**$K(k)$** is the complete elliptic integral function, of the first kind**$E(k)$** is the complete elliptic integral function, of the second kind. Online CalculatorPlease see the [online calculator](http://tiggerntatie.github.io/emagnet/offaxis/iloopcalculator.htm) for finding fields at any point in space due to a current loop. Example ApplicationThe following Python code implements the formulas on this page and presents curves that show axial and radial field strength components in the vicinity of a 1m radius loop of wire carrying 1A of current. Credits Formulas on this page are adapted from [SOME USEFUL INFORMATION FOR THE DESIGN OF AIR-CORE SOLENOIDS](https://docs.google.com/file/d/0Bw_DfnQIfCa-cWUxNzFOam1HeFk/edit?usp=sharing) by D. Bruce Montgomery and J. Terrell.
###Code
%matplotlib inline
from scipy.special import ellipk, ellipe, ellipkm1
from numpy import pi, sqrt, linspace, nan
from pylab import plot, xlabel, ylabel, suptitle, legend, show
uo = 4E-7*pi # Permeability constant - units of H/m
Bo = lambda i, a, u=uo: i*u/2./a # Central field = f(current, loop radius, perm. constant)
al = lambda r, a: r/a # Alpha = f(radius of measurement point, radius of loop)
be = lambda x, a: x/a # Beta = f(axial distance to meas. point, radius of loop)
ga = lambda x, r: x/r # Gamma = f(axial distance, radius to meas. point)
Q = lambda r, x, a: (1 + al(r,a))**2 + be(x,a)**2 # Q = f(radius, distance to meas. point, loop radius)
k = lambda r, x, a: sqrt(4*al(r,a)/Q(r,x,a)) # k = f(radius, distance to meas. point, loop radius)
K = lambda k: ellipk(k**2.0) # Elliptic integral, first kind, as a function of k
E = lambda k: ellipe(k**2.0) # Elliptic integral, second kind, as a function of k
# On-Axis field = f(current and radius of loop, x of measurement point)
def Baxial(i, a, x, u=uo):
if a == 0:
if x == 0:
            return nan
else:
return 0.0
else:
return (u*i*a**2)/2.0/(a**2 + x**2)**(1.5)
# Axial field component = f(current and radius of loop, r and x of meas. point)
def Bx(i, a, x, r):
if r == 0:
if x == 0:
return Bo(i,a) # central field
else:
return Baxial(i,a,x) # axial field
else: # axial component, any location
return Bo(i,a)*\
(E(k(r,x,a))*((1.0-al(r,a)**2-be(x,a)**2)/(Q(r,x,a)-4*al(r,a))) + K(k(r,x,a)))\
/pi/sqrt(Q(r,x,a))
# Radial field component = f(current and radius of loop, r and x of meas. point)
def Br(i, a, x, r):
if r == 0:
return 0 # no radial component on axis!
else: # radial component, any location other than axis.
return Bo(i,a)*ga(x,r)*\
(E(k(r,x,a))*((1.0+al(r,a)**2+be(x,a)**2)/(Q(r,x,a)-4*al(r,a))) - K(k(r,x,a)))\
/pi/sqrt(Q(r,x,a))
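# Quick sanity check (illustrative): at the loop centre (x=0, r=0) the axial
# field equals B0 = i*mu0/(2a), about 6.2832e-07 T for i=1 A, a=1 m, and the
# radial component on the axis is zero.
print(Bx(1, 1, 0, 0), Bo(1, 1))
print(Br(1, 1, 0.5, 0))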
###Output
_____no_output_____
###Markdown
Construct a family of field strength curves, as a function of radius and axial distance, for a unit coil (1m radius, 1A current):
###Code
axiallimit = 2.0 # meters from center
radiallimit = 0.5 # maximum radius to investigate
curveqty = 5
X = linspace(0,axiallimit)
R = linspace(0, radiallimit, curveqty)
[plot(X, [Bx(1,1,x,r) for x in X], label="r={0}".format(r)) for r in R]
xlabel("Axial Position (m)")
ylabel("Axial B field (T)")
suptitle("Axial component of unit coil (1m radius, 1A current) B field for various measurement radii")
legend()
show()
[plot(X, [Br(1,1,x,r) for x in X], label="r={0}".format(r)) for r in R]
xlabel("Axial Position (m)")
ylabel("Radial B field (T)")
suptitle("Radial component of unit coil (1m radius, 1A current) B field for various measurement radii")
legend()
show()
[plot(X, [sqrt(Bx(1,1,x,r)**2 + Br(1,1,x,r)**2) for x in X], label="r={0}".format(r)) for r in R]
xlabel("Axial Position (m)")
ylabel("B field (T)")
suptitle("Total unit coil (1m radius, 1A current) B field for various measurement radii")
legend()
show()
###Output
_____no_output_____
###Markdown
Now re-examine the nature of the Br field by plotting families of curves where the horizontal axis is radial position:
###Code
R = linspace(0, radiallimit)
X = linspace(0, axiallimit, curveqty)
[plot(R, [Bx(1,1,x,r) for r in R], label="x={0}".format(x)) for x in X]
xlabel("Radial Position (m)")
ylabel("Axial B field (T)")
suptitle("Axial component of unit coil (1m radius, 1A current) B field for various axial positions")
legend()
show()
[plot(R, [Br(1,1,x,r) for r in R], label="x={0}".format(x)) for x in X]
xlabel("Radial Position (m)")
ylabel("Radial B field (T)")
suptitle("Radial component of unit coil (1m radius, 1A current) B field for various axial positions")
legend()
show()
[plot(R, [sqrt(Bx(1,1,x,r)**2+Br(1,1,x,r)**2) for r in R], label="x={0}".format(x)) for x in X]
xlabel("Radial Position (m)")
ylabel("B field (T)")
suptitle("Total unit coil (1m radius, 1A current) B field for various axial positions")
legend()
show()
###Output
_____no_output_____ |
Arithmetic/Bidirectional_clean.ipynb | ###Markdown
Compress and Decompress
###Code
from __future__ import print_function
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(1)
# Imports needed by the cells below (the original omitted them). Note: arith, fqt
# and ProbabilityList are assumed to come from this repo's arithmetic-coding
# helper modules, which are not shown here.
import contextlib, filecmp, timeit
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dense, Activation, LayerNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
with open('data/ecoli/Ecoli.txt') as f:
for line in f:
ecoli = list(line)
temp_dict = {'a':97,'g': 103,'c': 99,'t': 116}
char_list = [97, 103, 99, 116] # we can read this as we go
legend = dict([(v, k) for k, v in enumerate(char_list)]) # map character to 0,1,2,3,4, etc.
s = [legend[temp_dict[i]] for i in ecoli]
vocab_size = len(char_list)
n = 100000 # number of samples
tsteps = 10 #time steps
seg_len = 6 #input_dim
k = tsteps*seg_len # full context for each sample
n_symb = 4
# optimizer
sgd_opt = 'adam'
lr = 4e-3
beta1 = 0
beta2 = 0.9999
eps=1e-5
# LSTM Training
hidden_size = 32
batch_size = 250
epochs = 1
n_layer = 4 # only 4 layers total? or 4 LSTM layers; the paper does say 4
opt = Adam(
learning_rate=lr , beta_1=0.0, beta_2=beta2, epsilon=eps
)
n_symb = 4
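# Architecture sketch: two stacked bidirectional LSTMs, each followed by layer
# normalisation, ending in a dense softmax over the 4 DNA symbols (a, g, c, t).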
BILSTM = Sequential()
BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,seg_len), return_sequences=True), input_shape=(tsteps,seg_len)))
BILSTM.add(LayerNormalization(axis=1 , center=True , scale=True))
# BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,hidden_size), return_sequences=True)))
# BILSTM.add(BatchNormalization(axis=1 , center=True , scale=True))
# BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,hidden_size), return_sequences=True)))
# BILSTM.add(BatchNormalization(axis=1 , center=True , scale=True))
BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,hidden_size))))
BILSTM.add(LayerNormalization(axis=1 , center=True , scale=True))
BILSTM.add(Dense(n_symb))
BILSTM.add(Activation('softmax'))
BILSTM.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['categorical_accuracy'])
BILSTM.summary()
# from __future__ import print_function
# from numpy.random import seed
# seed(1)
# from tensorflow import random
# random.set_seed(1)
tic=timeit.default_timer()
inputfile, outputfile = 'data/ecoli/Ecoli.txt', 'data/ecoli/Ecoli.bi_simple_seed1'
epochs = 1
e_idx = 0
with open(inputfile, "rb") as inp, \
contextlib.closing(arith.BitOutputStream(open(outputfile, "wb"))) as bitout:
## For the first n+k characters, we compress with default method
initfreqs = fqt.FlatFrequencyTable(257)
model = fqt.SimpleFrequencyTable(initfreqs) # For the first 200,000
enc = arith.ArithmeticCoder(32)
enc.start_encode(bitout) # New line!
for symbol in s[:n+k]:
t = model.get_total() ## New lines!
l = model.get_low(symbol)
h = model.get_high(symbol)
enc.storeRegion(l,h,t)
model.increment(symbol)
e_idx += 1
prior = [0 for i in range(257)]
prior[:4] = [0.25,0.25,0.25,0.25]
prior[256] = 1-sum(prior[:256])
model = ProbabilityList(prior) # reset model, now e_idx = n+k
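    # From here on we code in 200000-symbol windows: the BiLSTM is trained on the
    # previous window and its softmax outputs supply the symbol probabilities for
    # the arithmetic coder (the fixed seeds keep this reproducible for decoding).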
for overall in range(len(s)//200000 + 1):
predicted_val = []
if overall < len(s)//200000:
x = np.zeros((200000, tsteps, seg_len)) # 64 characters context
y = np.zeros((200000, n_symb))
print(overall)
idx3 = 0
for idx2 in range(200000*overall+k,200000*(overall+1)+k): #len(s)):
train_seq = s[idx2-k:idx2]
train_target = s[idx2]
x[idx3,:] = np.array(train_seq).reshape(tsteps,seg_len)
y[idx3] = to_categorical(train_target, num_classes=n_symb )
idx3 += 1
if overall == len(s)//200000:
x = np.zeros((len(s)-200000*overall-k, tsteps, seg_len)) # 64 characters context
y = np.zeros((len(s)-200000*overall-k, n_symb))
print(len(x))
print(overall)
idx3 = 0
for idx2 in range(200000*overall+k,len(s)): #len(s)):
train_seq = s[idx2-k:idx2]
train_target = s[idx2]
x[idx3,:] = np.array(train_seq).reshape(tsteps,seg_len)
y[idx3] = to_categorical(train_target, num_classes=n_symb )
idx3 += 1
if overall != 0 and overall != len(s)//200000:
predicted_val += list(BILSTM.predict(x[0:n]))
if overall != len(s)//200000:
BILSTM.fit(x[0:n], y[0:n],
batch_size=batch_size,
epochs=epochs,
validation_data=(x[n:2*n], y[n:2*n]))
predicted_val += list(BILSTM.predict(x[n:2*n]))
# For checking
x_arr = np.array(s[200000*(overall+1)-1:200000*(overall+1)+k-1]).reshape(1,tsteps,seg_len)
print(BILSTM(x_arr.astype(np.float32), training= False).numpy())
print(predicted_val[-1])
BILSTM.fit(x[n:2*n], y[n:2*n],
batch_size=batch_size,
epochs=epochs)
if overall == len(s)//200000:
predicted_val += list(BILSTM.predict(x[:]))
for prob_list in predicted_val:
# for val, prob in enumerate(prob_list):
# model.set(val, int(prob*100000)+1)
model.prob_list[:4] = prob_list
#model.prob_list[4:256] = [1/100000 for i in range(252)]
model.normalize()
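            # The arithmetic coder works on integer cumulative frequencies, so the
            # float probabilities are mapped onto a fixed total of 100000 counts.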
t = int(100000) ## New lines!
l = int(model.get_low(s[e_idx])*100000)
h = int(model.get_high(s[e_idx])*100000)
enc.storeRegion(l,h,t)
# t = model.get_total()
# l = model.get_low(legend[s[e_idx]])
# h = model.get_high(legend[s[e_idx]])
# enc.storeRegion(l,h,t)
e_idx += 1
if overall != len(s)//200000: ## checking to confirm
print(e_idx-1)
print(200000*(overall+1)+k-1)
x= None
y = None
del x
del y
predicted_val = None
del predicted_val
e_idx += 1
print(e_idx)
enc.finish_encode(bitout)
toc=timeit.default_timer()
print(toc-tic)
###Output
0
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 19s 187us/sample - loss: 1.4005 - categorical_accuracy: 0.2886 - val_loss: 1.3704 - val_categorical_accuracy: 0.3052
[[0.10682002 0.23681933 0.32078034 0.33558035]]
[0.10682004 0.23681933 0.3207804 0.3355803 ]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3724 - categorical_accuracy: 0.3021
200059
200059
1
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3703 - categorical_accuracy: 0.3051 - val_loss: 1.3609 - val_categorical_accuracy: 0.3197
[[0.16085665 0.21093686 0.3661135 0.26209292]]
[0.16085666 0.21093689 0.36611354 0.26209292]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3623 - categorical_accuracy: 0.3197
400059
400059
2
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3504 - categorical_accuracy: 0.3340 - val_loss: 1.3658 - val_categorical_accuracy: 0.3197
[[0.22976562 0.45477754 0.15848337 0.15697353]]
[0.22976558 0.45477754 0.15848337 0.15697348]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3506 - categorical_accuracy: 0.3344
600059
600059
3
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3482 - categorical_accuracy: 0.3367 - val_loss: 1.3461 - val_categorical_accuracy: 0.3348
[[0.34205866 0.21478042 0.19060406 0.25255686]]
[0.34205866 0.21478042 0.19060406 0.25255683]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3440 - categorical_accuracy: 0.3398
800059
800059
4
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3395 - categorical_accuracy: 0.3465 - val_loss: 1.3445 - val_categorical_accuracy: 0.3427
[[0.28499782 0.26763287 0.27802828 0.16934104]]
[0.28499782 0.26763287 0.27802828 0.16934103]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3415 - categorical_accuracy: 0.3415
1000059
1000059
5
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3450 - categorical_accuracy: 0.3387 - val_loss: 1.3432 - val_categorical_accuracy: 0.3401
[[0.25097355 0.23550521 0.15837172 0.35514948]]
[0.2509736 0.23550521 0.15837172 0.35514948]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3398 - categorical_accuracy: 0.3437
1200059
1200059
6
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 103us/sample - loss: 1.3472 - categorical_accuracy: 0.3354 - val_loss: 1.3397 - val_categorical_accuracy: 0.3465
[[0.16250196 0.13320644 0.37157688 0.33271477]]
[0.16250195 0.13320644 0.37157694 0.33271474]
Train on 100000 samples
100000/100000 [==============================] - 7s 72us/sample - loss: 1.3399 - categorical_accuracy: 0.3455
1400059
1400059
7
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3437 - categorical_accuracy: 0.3401 - val_loss: 1.3518 - val_categorical_accuracy: 0.3320
[[0.142209 0.3581357 0.1361492 0.3635061]]
[0.14220902 0.35813573 0.13614915 0.36350608]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3469 - categorical_accuracy: 0.3378
1600059
1600059
8
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3454 - categorical_accuracy: 0.3400 - val_loss: 1.3406 - val_categorical_accuracy: 0.3425
[[0.2984873 0.19177109 0.21848066 0.29126093]]
[0.29848734 0.19177106 0.21848069 0.29126096]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3400 - categorical_accuracy: 0.3446
1800059
1800059
9
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3391 - categorical_accuracy: 0.3466 - val_loss: 1.3365 - val_categorical_accuracy: 0.3492
[[0.3352718 0.12610734 0.3108131 0.2278078 ]]
[0.3352718 0.1261073 0.31081307 0.2278078 ]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3358 - categorical_accuracy: 0.3505
2000059
2000059
10
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3412 - categorical_accuracy: 0.3419 - val_loss: 1.3363 - val_categorical_accuracy: 0.3517
[[0.21322675 0.17990552 0.20059533 0.40627244]]
[0.21322675 0.17990552 0.20059533 0.40627244]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3323 - categorical_accuracy: 0.3539
2200059
2200059
11
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3310 - categorical_accuracy: 0.3565 - val_loss: 1.3316 - val_categorical_accuracy: 0.3551
[[0.30734414 0.23319876 0.22618341 0.23327366]]
[0.30734414 0.23319875 0.22618341 0.23327366]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3311 - categorical_accuracy: 0.3551
2400059
2400059
12
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3405 - categorical_accuracy: 0.3458 - val_loss: 1.3327 - val_categorical_accuracy: 0.3537
[[0.16882233 0.29516506 0.3181532 0.21785946]]
[0.16882232 0.29516503 0.3181532 0.21785943]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3288 - categorical_accuracy: 0.3578
2600059
2600059
13
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3233 - categorical_accuracy: 0.3628 - val_loss: 1.3614 - val_categorical_accuracy: 0.3250
[[0.23680843 0.3197375 0.24387547 0.19957855]]
[0.23680846 0.31973752 0.24387547 0.19957855]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3469 - categorical_accuracy: 0.3377
2800059
2800059
14
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3279 - categorical_accuracy: 0.3578 - val_loss: 1.3386 - val_categorical_accuracy: 0.3473
[[0.22505125 0.46360007 0.14570253 0.16564608]]
[0.22505131 0.46360007 0.14570251 0.16564606]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3364 - categorical_accuracy: 0.3486
3000059
3000059
15
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3292 - categorical_accuracy: 0.3547 - val_loss: 1.3343 - val_categorical_accuracy: 0.3484
[[0.24795687 0.29482338 0.202236 0.25498375]]
[0.24795686 0.2948234 0.20223603 0.25498375]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3328 - categorical_accuracy: 0.3507
3200059
3200059
16
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 103us/sample - loss: 1.3280 - categorical_accuracy: 0.3582 - val_loss: 1.3305 - val_categorical_accuracy: 0.3547
[[0.35650343 0.24039528 0.22585429 0.17724705]]
[0.35650334 0.2403953 0.2258543 0.17724703]
Train on 100000 samples
100000/100000 [==============================] - 7s 73us/sample - loss: 1.3266 - categorical_accuracy: 0.3574
3400059
3400059
17
Train on 100000 samples, validate on 100000 samples
100000/100000 [==============================] - 10s 104us/sample - loss: 1.3373 - categorical_accuracy: 0.3443 - val_loss: 1.3290 - val_categorical_accuracy: 0.3569
[[0.12689638 0.23692253 0.3957916 0.24038951]]
[0.12689634 0.2369225 0.39579165 0.24038947]
Train on 100000 samples
###Markdown
Decompression
###Code
from __future__ import print_function
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(1)
with open('data/ecoli/Ecoli.txt') as f:
for line in f:
ecoli = list(line)
temp_dict = {'a':97,'g': 103,'c': 99,'t': 116}
char_list = [97, 103, 99, 116] # we can read this as we go
legend = dict([(v, k) for k, v in enumerate(char_list)]) # map character to 0,1,2,3,4, etc.
s = [legend[temp_dict[i]] for i in ecoli]
vocab_size = len(char_list)
n = 100000 # number of samples
tsteps = 10 #time steps
seg_len = 6 #input_dim
k = tsteps*seg_len # full context for each sample
n_symb = 4
# optimizer
sgd_opt = 'adam'
lr = 4e-3
beta1 = 0
beta2 = 0.9999
eps=1e-5
# LSTM Training
hidden_size = 32
batch_size = 250
epochs = 1
n_layer = 4 #only 4 total laters? or 4 LSTM it does say 4
opt = Adam(
learning_rate=lr , beta_1=0.0, beta_2=beta2, epsilon=eps
)
n_symb = 4
BILSTM = Sequential()
BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,seg_len), return_sequences=True), input_shape=(tsteps,seg_len)))
BILSTM.add(LayerNormalization(axis=1 , center=True , scale=True))
# BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,hidden_size), return_sequences=True)))
# BILSTM.add(BatchNormalization(axis=1 , center=True , scale=True))
# BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,hidden_size), return_sequences=True)))
# BILSTM.add(BatchNormalization(axis=1 , center=True , scale=True))
BILSTM.add(Bidirectional(LSTM(hidden_size, activation='tanh', stateful=False, batch_input_shape=(batch_size,tsteps,hidden_size))))
BILSTM.add(LayerNormalization(axis=1 , center=True , scale=True))
BILSTM.add(Dense(n_symb))
BILSTM.add(Activation('softmax'))
BILSTM.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['categorical_accuracy'])
BILSTM.summary()
tic=timeit.default_timer()
inputfile, outputfile = 'data/ecoli/Ecoli.bi_simple_seed1', 'data/ecoli/Ecoli_decompressed.txt'
epochs = 1
e_idx = 0
# Perform file decompression
with open(inputfile, "rb") as inp, open(outputfile, "wb") as out:
bitin = arith.BitInputStream(inp)
## For the first n+k characters, we compress with default method
initfreqs = fqt.FlatFrequencyTable(257)
model = fqt.SimpleFrequencyTable(initfreqs)
dec = arith.ArithmeticCoder(32)
dec.start_decode(bitin)
new_s = []
while e_idx < n+k:
total = model.get_total()
Range = dec.R
offset = dec.getTarget()
value = dec.getTarget(total)
start = 0
end = model.get_symbol_limit()
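        # Binary search for the symbol whose cumulative-frequency
        # interval [low, high) contains the decoded target value.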
while end - start > 1:
middle = (start + end) >> 1
if model.get_low(middle) > value:
end = middle
else:
start = middle
symbol = start
l = model.get_low(symbol)
h = model.get_high(symbol)
dec.loadRegion(l,h,total)
model.increment(symbol)
out.write(bytes((char_list[symbol],)))
new_s.append(symbol)
e_idx += 1
prior = [0 for i in range(257)]
prior[:4] = [0.25,0.25,0.25,0.25]
prior[256] = 1-sum(prior[:256])
model = ProbabilityList(prior) # reset model, now e_idx = n+k
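    # Decoding must mirror encoding exactly: the decoder retrains the same seeded
    # BiLSTM on the symbols it has already decoded, so its predicted probabilities
    # match the encoder's step for step.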
    for overall in range(len(s)//100000 + 1): # assume we store len(s) in a small header (a few bytes), removing the need for an EOF (256) symbol
if overall < len(s)//100000:
x = np.zeros((100000, tsteps, seg_len)) # 64 characters context
y = np.zeros((100000, n_symb))
print(overall)
idx3 = 0
for idx2 in range(100000*overall+k,100000*(overall+1)+k): #len(s)):
train_seq = new_s[idx2-k:idx2]
train_target = new_s[idx2]
x[idx3,:] = np.array(train_seq).reshape(tsteps,seg_len)
y[idx3] = to_categorical(train_target, num_classes=n_symb )
idx3 += 1
BILSTM.fit(x[0:n], y[0:n],
batch_size=batch_size,
epochs=epochs)
if overall == len(s)//100000:
segment_len = len(s)-100000*overall-k
else:
segment_len = 100000
print(new_s == s[:len(new_s)])
temp_x = new_s[-1*k-1:-1]
x_arr = np.array(temp_x).reshape(1,tsteps,seg_len)
print(BILSTM(x_arr.astype(np.float32), training= False).numpy())
temp_x = new_s[-1*k:]
for i in range(segment_len):
x_arr = np.array(temp_x).reshape(1,tsteps,seg_len)
prob_list_temp = BILSTM(x_arr.astype(np.float32), training= False).numpy()
model.prob_list[:4] = prob_list_temp[0]
model.normalize()
# print(model.prob_list[:4])
# print(dec.R)
# print(dec.getTarget(total))
# print(model.get_symbol_limit())
total = int(100000) ## New lines!
Range = dec.R
offset = dec.getTarget()
value = dec.getTarget(total)
start = 0
end = model.get_symbol_limit()
while end - start > 1:
middle = (start + end) >> 1
if int(model.get_low(middle)*100000) > value:
#print(int(model.get_low(middle)*100000))
end = middle
else:
start = middle
symbol = start
assert symbol != 256
out.write(bytes((char_list[symbol],)))
l = int(model.get_low(symbol)*100000)
h = int(model.get_high(symbol)*100000)
dec.loadRegion(l,h,total)
temp_x = temp_x[1:] + [symbol]
new_s.append(symbol)
if e_idx%20000 == 0:
print(e_idx)
e_idx += 1
print(BILSTM(x_arr.astype(np.float32), training= False).numpy())
print(e_idx-1)
print(200000*(overall+1)+k-1)
e_idx += 1
print(e_idx)
toc=timeit.default_timer()
print(toc-tic)
np.save('hmmmm', new_s)
print(new_s[100030:100060])
print(s[100030:100060])
print(new_s[200030:200060])
print(s[200030:200060])
filecmp.cmp('data/ecoli/Ecoli.txt', 'data/ecoli/Ecoli_decompressed.txt')
###Output
_____no_output_____ |
Seminar/2_LinearModels_LHCb_PID.ipynb | ###Markdown
Sample management
###Code
!wget https://github.com/hse-aml/hadron-collider-machine-learning/releases/download/Week_2/training.csv.gz
!gunzip training.csv.gz
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('training.csv')
len(data)
###Output
_____no_output_____
###Markdown
Feature descriptionHere, Spd stands for Scintillating Pad Detector, Prs - Preshower, Ecal - electromagnetic calorimeter, Hcal - hadronic calorimeter, Brem denotes traces of the particles that were deflected by detectorFeatures:* ID - id value for tracks (present only in the test file, for submission purposes)* Label - string valued observable denoting particle types. Can take values "Electron", "Muon", "Kaon", "Proton", "Pion" and "Ghost". This column is absent in the test file.* FlagSpd - flag (0 or 1), if reconstructed track passes through Spd* FlagPrs - flag (0 or 1), if reconstructed track passes through Prs* FlagBrem - flag (0 or 1), if reconstructed track passes through Brem* FlagEcal - flag (0 or 1), if reconstructed track passes through Ecal* FlagHcal - flag (0 or 1), if reconstructed track passes through Hcal* FlagRICH1 - flag (0 or 1), if reconstructed track passes through the first RICH detector* FlagRICH2 - flag (0 or 1), if reconstructed track passes through the second RICH detector* FlagMuon - flag (0 or 1), if reconstructed track passes through muon stations (Muon)* SpdE - energy deposit associated to the track in the Spd* PrsE - energy deposit associated to the track in the Prs* EcalE - energy deposit associated to the track in the Ecal* HcalE - energy deposit associated to the track in the Hcal* PrsDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Prs* BremDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Brem* TrackP - particle momentum* TrackPt - particle transverse momentum* TrackNDoFSubdetector1 - number of degrees of freedom for track fit using hits in the tracking sub-detector1* TrackQualitySubdetector1 - chi2 quality of the track fit using hits in the tracking sub-detector1* TrackNDoFSubdetector2 - number of degrees of freedom for track fit using hits in the tracking sub-detector2* TrackQualitySubdetector2 - chi2 quality of the track fit using hits in the tracking sub-detector2* TrackNDoF - number of degrees of freedom for track fit using hits in all tracking sub-detectors* TrackQualityPerNDoF - chi2 quality of the track fit per degree of freedom* TrackDistanceToZ - distance between track and z-axis (beam axis)* Calo2dFitQuality - quality of the 2d fit of the clusters in the calorimeter* Calo3dFitQuality - quality of the 3d fit in the calorimeter with assumption that particle was electron* EcalDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Ecal* EcalDLLbeMuon - delta log-likelihood for a particle candidate to be muon using information from Ecal* EcalShowerLongitudinalParameter - longitudinal parameter of Ecal shower* HcalDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Hcal* HcalDLLbeMuon - delta log-likelihood for a particle candidate to be muon using information from Hcal* RICHpFlagElectron - flag (0 or 1) if momentum is greater than threshold for electrons to produce Cherenkov light* RICHpFlagProton - flag (0 or 1) if momentum is greater than threshold for protons to produce Cherenkov light* RICHpFlagPion - flag (0 or 1) if momentum is greater than threshold for pions to produce Cherenkov light* RICHpFlagKaon - flag (0 or 1) if momentum is greater than threshold for kaons to produce Cherenkov light* RICHpFlagMuon - flag (0 or 1) if momentum is greater than threshold for muons to produce Cherenkov light* RICH_DLLbeBCK - delta log-likelihood for a particle candidate 
to be background using information from RICH* RICH_DLLbeKaon - delta log-likelihood for a particle candidate to be kaon using information from RICH* RICH_DLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from RICH* RICH_DLLbeMuon - delta log-likelihood for a particle candidate to be muon using information from RICH* RICH_DLLbeProton - delta log-likelihood for a particle candidate to be proton using information from RICH* MuonFlag - muon flag (is this track muon) which is determined from muon stations* MuonLooseFlag - muon flag (is this track muon) which is determined from muon stations using looser criteria* MuonLLbeBCK - log-likelihood for a particle candidate to be not muon using information from muon stations* MuonLLbeMuon - log-likelihood for a particle candidate to be muon using information from muon stations* DLLelectron - delta log-likelihood for a particle candidate to be electron using information from all subdetectors* DLLmuon - delta log-likelihood for a particle candidate to be muon using information from all subdetectors* DLLkaon - delta log-likelihood for a particle candidate to be kaon using information from all subdetectors* DLLproton - delta log-likelihood for a particle candidate to be proton using information from all subdetectors* GhostProbability - probability for a particle candidate to be a ghost track. This variable is an output of the classification model used in the tracking algorithm.Delta log-likelihood in the feature descriptions means the difference between the log-likelihood for the mass hypothesis that a given track is left by some particle (for example, electron) and the log-likelihood for the mass hypothesis that a given track is left by a pion (so, DLLpion = 0 and thus we don't have these columns). This is done since most tracks (~80%) are left by pions and in practice we actually need to discriminate other particles from pions. In other words, the null hypothesis is that the particle is a pion. Feature engineeringFeature selection and preprocessing, model validation
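Stated as a formula (this just restates the definition above): $\mathrm{DLL}_X = \ln\mathcal{L}(X) - \ln\mathcal{L}(\pi)$, so large positive values favour the mass hypothesis $X$ over the pion hypothesis.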
###Code
data.columns
###Output
_____no_output_____
###Markdown
Let's consider PID between two particle types for simplicity:
###Code
data = data[(data.Label == 'Kaon') | (data.Label == 'Pion')].copy()
features = [col for col in data.columns if col != 'Label']
data['Label'] = (data.Label == 'Kaon').astype(float)
print(len(data))
from sklearn import linear_model, metrics, model_selection, preprocessing
train, test = model_selection.train_test_split(data, test_size=0.25)
###Output
_____no_output_____
###Markdown
Selecting the best features is quite an important and non-trivial part of building machine learning models. Scikit-learn has a number of ways to automate this process - to be used with caution - see [this page](https://scikit-learn.org/stable/modules/feature_selection.html) for more details (a minimal `SelectKBest` sketch is shown at the top of the next cell).At this point we are not going to use these tools, but rather do a really simple thing: we will score each feature with `roc_auc_score` to find those giving maximum separation between classes.
###Code
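# Aside - an illustrative sketch only (not used below): scikit-learn can automate
# selection, e.g. univariate SelectKBest with the ANOVA F-score. Note this
# criterion differs from the AUC ranking we compute by hand next.
from sklearn.feature_selection import SelectKBest, f_classif
skb = SelectKBest(f_classif, k=10).fit(data[features], data.Label)
print([f for f, keep in zip(features, skb.get_support()) if keep])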
# Build an array of scores of form [(feature1, score1), (feature2, score2), ...]
scores = [(f, metrics.roc_auc_score(data.Label, data[f])) for f in features]
# Sort this array by the scores in descending order.
# As AUC is symmetric with respect to 0.5, we'll sort
# by max(score, 1-score):
scores = (sorted(scores, key=lambda x: -max(x[1], 1-x[1])))
# Print top 10:
print("Feature : roc_auc_score \n")
for f, score in scores[:10]:
print("{} : {}".format(f, score))
###Output
_____no_output_____
###Markdown
So, just a single `DLLkaon` feature gives us an AUC of 94%!Let's see if we can beat this score. The simplest thing we can do is to take, say, 10 best features and feed them into a logistic regression model:
###Code
top10_features = list(list(zip(*scores))[0][:10])
def get_features(dataset):
return dataset[top10_features]
model = linear_model.LogisticRegression()
model.fit(get_features(train), train.Label)
preds_train = model.predict_proba(get_features(train))[:,1]
preds_test = model.predict_proba(get_features(test ))[:,1]
print(metrics.roc_auc_score(train.Label, preds_train))
print(metrics.roc_auc_score(test .Label, preds_test ))
###Output
_____no_output_____
###Markdown
Hmm, that just decreased the score.Let's look at the range of these features:
###Code
for f in top10_features:
print("{:20s} : ({:10.2f}, {:10.2f})".format(f, data[f].min(), data[f].max()))
###Output
_____no_output_____
###Markdown
We can notice two things:1. ranges are very different2. some variables have 'unnatural' minimum of -999Let's discuss problem 1 first. Our model treats its inputs as vectors of $R^M$ space ($M$ is the number of features), and calculates things like dot-product ${\bf W}\cdot{\bf x}$. This assumes that all the components of these vectors are objects of the same nature and have the same units. Obviously this is not the case. We can however emulate this by scaling the components of these vectors to have the same variance and mean:
###Code
def get_features(dataset):
return dataset[top10_features]
scaler = preprocessing.RobustScaler()
scaler.fit(get_features(train))
model = linear_model.LogisticRegression()
model.fit(scaler.transform(get_features(train)), train.Label)
preds_train = model.predict_proba(scaler.transform(get_features(train)))[:,1]
preds_test = model.predict_proba(scaler.transform(get_features(test )))[:,1]
print(metrics.roc_auc_score(train.Label, preds_train))
print(metrics.roc_auc_score(test .Label, preds_test ))
###Output
_____no_output_____
###Markdown
This increased the score slightly.Now, problem 2. Let's have a look at one of these features with -999 minimum:
###Code
plt.hist(data.RICH_DLLbeKaon, bins=300);
###Output
_____no_output_____
###Markdown
Note the standalone peak near -1000 (actually, -999). It looks like some discrete value was used to denote the cases when `RICH_DLLbeKaon` could not be calculated.The simplest thing we can do is to replace -999 by the mean of the feature, but that alone would lose the information, so let's also encode it into a new feature:
###Code
def convert_outlier(column, value=-999):
"""
This function takes a single pandas column and returns a dataframe
with two columns: same column with all occurrences of `value`
replaced by mean and a binary `column == value` column
"""
is_out = (column == value)
is_out.name += '_out'
mean = column[~is_out].mean()
column = column.copy()
column[is_out] = mean
return pd.concat([column, is_out.astype(float)], axis=1)
outlier_columns = [f for f in top10_features if (data[f] == -999).sum() > 0]
print(outlier_columns)
def get_features(dataset):
return pd.concat([convert_outlier(dataset[f]) if f in outlier_columns else
dataset[f] for f in top10_features], axis=1)
###Output
_____no_output_____
###Markdown
Let's go to our final training.
###Code
scaler = preprocessing.RobustScaler()
scaler.fit(get_features(train))
model = linear_model.LogisticRegression()
model.fit(scaler.transform(get_features(train)), train.Label)
preds_train = model.predict_proba(scaler.transform(get_features(train)))[:,1]
preds_test = model.predict_proba(scaler.transform(get_features(test )))[:,1]
print(metrics.roc_auc_score(train.Label, preds_train))
print(metrics.roc_auc_score(test .Label, preds_test ))
fpr_, tpr_, _ = metrics.roc_curve(test.Label, preds_test )
fpr_dll, tpr_dll, _ = metrics.roc_curve(test.Label, test.DLLkaon)
plt.plot(fpr_dll, tpr_dll, label='DLLkaon')
plt.plot(fpr_, tpr_, label='Our Model')
plt.legend()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show();
###Output
_____no_output_____
###Markdown
Huh! We've finally beaten the `DLLkaon` score. k-fold cross validationNow let's use the k-fold cross validation technique to ensure this is indeed the case.k-fold cross-validation randomly divides the data into k blocks of roughly equal size. Each of the blocks is left out in turn and the other k-1 blocks are used to train the model. The held out block is predicted and these predictions are summarized into some type of performance measure (e.g. accuracy, root mean squared error (RMSE), etc.). The k estimates of performance are averaged to get the overall resampled estimate.
###Code
kf = model_selection.KFold(n_splits=5, shuffle=True, random_state=1234)
aucs_single = []
aucs_model = []
for i_train, i_test in kf.split(data):
train = data.iloc[i_train]
test = data.iloc[i_test ]
scaler = preprocessing.RobustScaler()
scaler.fit(get_features(train))
model = linear_model.LogisticRegression()
model.fit(scaler.transform(get_features(train)), train.Label)
preds_test = model.predict_proba(
scaler.transform(get_features(test))
)[:,1]
aucs_model .append(metrics.roc_auc_score(test.Label, preds_test))
aucs_single.append(metrics.roc_auc_score(test.Label, test.DLLkaon))
plt.figure(figsize=(7, 1.7))
plt.scatter(aucs_model , [0] * len(aucs_model ), s=100, alpha=0.5, c='r', label='Our model');
plt.scatter(aucs_single, [0] * len(aucs_single), s=100, alpha=0.5, c='b', label='just DLLkaon');
plt.yticks([]);
plt.ylim(-0.2, 0.4)
plt.xlabel("AUC");
plt.legend();
###Output
_____no_output_____
###Markdown
Sample management
###Code
!wget https://github.com/hse-aml/hadron-collider-machine-learning/releases/download/Week_2/training.csv.gz
!gunzip training.csv.gz
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('training.csv')
len(data)
###Output
_____no_output_____
###Markdown
Feature descriptionHere, Spd stands for Scintillating Pad Detector, Prs - Preshower, Ecal - electromagnetic calorimeter, Hcal - hadronic calorimeter, Brem denotes traces of the particles that were deflected by detectorFeatures:* ID - id value for tracks (presents only in the test file for the submitting purposes)* Label - string valued observable denoting particle types. Can take values "Electron", "Muon", "Kaon", "Proton", "Pion" and "Ghost". This column is absent in the test file.* FlagSpd - flag (0 or 1), if reconstructed track passes through Spd* FlagPrs - flag (0 or 1), if reconstructed track passes through Prs* FlagBrem - flag (0 or 1), if reconstructed track passes through Brem* FlagEcal - flag (0 or 1), if reconstructed track passes through Ecal* FlagHcal - flag (0 or 1), if reconstructed track passes through Hcal* FlagRICH1 - flag (0 or 1), if reconstructed track passes through the first RICH detector* FlagRICH2 - flag (0 or 1), if reconstructed track passes through the second RICH detector* FlagMuon - flag (0 or 1), if reconstructed track passes through muon stations (Muon)* SpdE - energy deposit associated to the track in the Spd* PrsE - energy deposit associated to the track in the Prs* EcalE - energy deposit associated to the track in the Hcal* HcalE - energy deposit associated to the track in the Hcal* PrsDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Prs* BremDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Brem* TrackP - particle momentum* TrackPt - particle transverse momentum* TrackNDoFSubdetector1 - number of degrees of freedom for track fit using hits in the tracking sub-detector1* TrackQualitySubdetector1 - chi2 quality of the track fit using hits in the tracking sub-detector1* TrackNDoFSubdetector2 - number of degrees of freedom for track fit using hits in the tracking sub-detector2* TrackQualitySubdetector2 - chi2 quality of the track fit using hits in the tracking sub-detector2* TrackNDoF - number of degrees of freedom for track fit using hits in all tracking sub-detectors* TrackQualityPerNDoF - chi2 quality of the track fit per degree of freedom* TrackDistanceToZ - distance between track and z-axis (beam axis)* Calo2dFitQuality - quality of the 2d fit of the clusters in the calorimeter* Calo3dFitQuality - quality of the 3d fit in the calorimeter with assumption that particle was electron* EcalDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Ecal* EcalDLLbeMuon - delta log-likelihood for a particle candidate to be muon using information from Ecal* EcalShowerLongitudinalParameter - longitudinal parameter of Ecal shower* HcalDLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from Hcal* HcalDLLbeMuon - delta log-likelihood for a particle candidate to be using information from Hcal* RICHpFlagElectron - flag (0 or 1) if momentum is greater than threshold for electrons to produce Cherenkov light* RICHpFlagProton - flag (0 or 1) if momentum is greater than threshold for protons to produce Cherenkov light* RICHpFlagPion - flag (0 or 1) if momentum is greater than threshold for pions to produce Cherenkov light* RICHpFlagKaon - flag (0 or 1) if momentum is greater than threshold for kaons to produce Cherenkov light* RICHpFlagMuon - flag (0 or 1) if momentum is greater than threshold for muons to produce Cherenkov light* RICH_DLLbeBCK - delta log-likelihood for a particle candidate 
to be background using information from RICH* RICH_DLLbeKaon - delta log-likelihood for a particle candidate to be kaon using information from RICH* RICH_DLLbeElectron - delta log-likelihood for a particle candidate to be electron using information from RICH* RICH_DLLbeMuon - delta log-likelihood for a particle candidate to be muon using information from RICH* RICH_DLLbeProton - delta log-likelihood for a particle candidate to be proton using information from RICH* MuonFlag - muon flag (is this track muon) which is determined from muon stations* MuonLooseFlag muon flag (is this track muon) which is determined from muon stations using looser criteria* MuonLLbeBCK - log-likelihood for a particle candidate to be not muon using information from muon stations* MuonLLbeMuon - log-likelihood for a particle candidate to be muon using information from muon stations* DLLelectron - delta log-likelihood for a particle candidate to be electron using information from all subdetectors* DLLmuon - delta log-likelihood for a particle candidate to be muon using information from all subdetectors* DLLkaon - delta log-likelihood for a particle candidate to be kaon using information from all subdetectors* DLLproton - delta log-likelihood for a particle candidate to be proton using information from all subdetectors* GhostProbability - probability for a particle candidate to be ghost track. This variable is an output of classification model used in the tracking algorithm.Delta log-likelihood in the features descriptions means the difference between log-likelihood for the mass hypothesis that a given track is left by some particle (for example, electron) and log-likelihood for the mass hypothesis that a given track is left by a pion (so, DLLpion = 0 and thus we don't have these columns). This is done since most tracks (~80%) are left by pions and in practice we actually need to discriminate other particles from pions. In other words, the null hypothesis is that particle is a pion. Feature enigineeringFeature selection and preprocessing, model validation
###Code
data.columns
###Output
_____no_output_____
###Markdown
Let's consider PID between two particle types for simplicity:
###Code
data = data[(data.Label == 'Kaon') | (data.Label == 'Pion')].copy()
features = [col for col in data.columns if col != 'Label']
data['Label'] = (data.Label == 'Kaon').astype(float)
print(len(data))
from sklearn import linear_model, metrics, model_selection, preprocessing
train, test = model_selection.train_test_split(data, test_size=0.25)
###Output
_____no_output_____
###Markdown
Selecting the best features is quite an important and non-trivial part of building machine learning models. Scikit-learn has a number of ways to automate this process - to be used with caution - see [this page](https://scikit-learn.org/stable/modules/feature_selection.html) for more details. At this point we are not going to use these tools, but rather do a really simple thing: we will score each feature with `roc_auc_score` to find those giving maximum separation between classes.
###Code
# Build an array of scores of form [(feature1, score1), (feature2, score2), ...]
scores = [(f, metrics.roc_auc_score(data.Label, data[f])) for f in features]
# Sort this array by the scores in descending order.
# As AUC is symmetric with respect to 0.5, we'll sort
# by max(score, 1-score):
scores = (sorted(scores, key=lambda x: -max(x[1], 1-x[1])))
# Print top 10:
print("Feature : roc_auc_score \n")
for f, score in scores[:10]:
print("{} : {}".format(f, score))
###Output
_____no_output_____
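###Markdown
As an aside, before continuing with the manual ranking above: the automated scikit-learn route mentioned earlier could look like the sketch below. This is a hedged example, not part of the original analysis; the variable name `top10_auto` is ours.
###Code
# A minimal sketch of automated univariate feature selection,
# assuming `data` and `features` are defined as above.
from sklearn.feature_selection import SelectKBest, f_classif
selector = SelectKBest(score_func=f_classif, k=10)
selector.fit(data[features], data.Label)
top10_auto = [f for f, keep in zip(features, selector.get_support()) if keep]
print(top10_auto)
###Output
_____no_output_____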
###Markdown
So, just a single `DLLkaon` feature gives us an AUC of 94%! Let's see if we can beat this score. The simplest thing we can do is to take, say, the 10 best features and feed them into a logistic regression model:
###Code
top10_features = list(list(zip(*scores))[0][:10])
def get_features(dataset):
return dataset[top10_features]
model = linear_model.LogisticRegression()
model.fit(get_features(train), train.Label)
preds_train = model.predict_proba(get_features(train))[:,1]
preds_test = model.predict_proba(get_features(test ))[:,1]
print(metrics.roc_auc_score(train.Label, preds_train))
print(metrics.roc_auc_score(test .Label, preds_test ))
###Output
_____no_output_____
###Markdown
Hmm, that just decreased the score. Let's look at the range of these features:
###Code
for f in top10_features:
print("{:20s} : ({:10.2f}, {:10.2f})".format(f, data[f].min(), data[f].max()))
###Output
_____no_output_____
###Markdown
We can notice two things:

1. the ranges are very different
2. some variables have an 'unnatural' minimum of -999

Let's discuss problem 1 first. Our model treats its inputs as vectors of the $R^M$ space ($M$ is the number of features), and calculates things like the dot-product ${\bf W}\cdot{\bf x}$. This assumes that all the components of these vectors are objects of the same nature and have the same units. Obviously this is not the case. We can however emulate this by scaling the components of these vectors to have the same variance and mean:
###Code
def get_features(dataset):
return dataset[top10_features]
scaler = preprocessing.RobustScaler()
scaler.fit(get_features(train))
model = linear_model.LogisticRegression()
model.fit(scaler.transform(get_features(train)), train.Label)
preds_train = model.predict_proba(scaler.transform(get_features(train)))[:,1]
preds_test = model.predict_proba(scaler.transform(get_features(test )))[:,1]
print(metrics.roc_auc_score(train.Label, preds_train))
print(metrics.roc_auc_score(test .Label, preds_test ))
###Output
_____no_output_____
###Markdown
This increased the score slightly. Now, problem 2. Let's have a look at one of the features with the -999 minimum:
###Code
plt.hist(data.RICH_DLLbeKaon, bins=300);
###Output
_____no_output_____
###Markdown
Note the standalone peak near -1000 (actually, -999). It looks like some discrete value was used to denote the cases when `RICH_DLLbeKaon` could not be calculated. The simplest thing we could do is replace -999 by the mean of the feature, but that alone would lose the information that the value was missing, so let's also encode it into a new feature:
###Code
def convert_outlier(column, value=-999):
"""
This function takes a single pandas column and returns a dataframe
with two columns: same column with all occurrences of `value`
replaced by mean and a binary `column == value` column
"""
is_out = (column == value)
is_out.name += '_out'
mean = column[~is_out].mean()
column = column.copy()
column[is_out] = mean
return pd.concat([column, is_out.astype(float)], axis=1)
outlier_columns = [f for f in top10_features if (data[f] == -999).sum() > 0]
print(outlier_columns)
def get_features(dataset):
return pd.concat([convert_outlier(dataset[f]) if f in outlier_columns else
dataset[f] for f in top10_features], axis=1)
###Output
_____no_output_____
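###Markdown
A quick sanity check of `convert_outlier` on a toy column (the values below are hypothetical, not from the dataset): -999 should be replaced by the mean of the remaining values, and the extra `_out` column should mark where it occurred.
###Code
# Hypothetical toy example; assumes pandas is imported as pd (it is used above).
toy = pd.Series([1.0, -999.0, 3.0, 5.0], name='toy_feature')
print(convert_outlier(toy))
###Output
_____no_output_____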
###Markdown
Let's move on to our final training.
###Code
scaler = preprocessing.RobustScaler()
scaler.fit(get_features(train))
model = linear_model.LogisticRegression()
model.fit(scaler.transform(get_features(train)), train.Label)
preds_train = model.predict_proba(scaler.transform(get_features(train)))[:,1]
preds_test = model.predict_proba(scaler.transform(get_features(test )))[:,1]
print(metrics.roc_auc_score(train.Label, preds_train))
print(metrics.roc_auc_score(test .Label, preds_test ))
fpr_, tpr_, _ = metrics.roc_curve(test.Label, preds_test )
fpr_dll, tpr_dll, _ = metrics.roc_curve(test.Label, test.DLLkaon)
plt.plot(fpr_dll, tpr_dll, label='DLLkaon')
plt.plot(fpr_, tpr_, label='Our Model')
plt.legend()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show();
###Output
_____no_output_____
###Markdown
Huh! We've finally beaten the `DLLkaon` score.

k-fold cross validation

Now let's use the k-fold cross-validation technique to ensure this is indeed the case. k-fold cross-validation randomly divides the data into k blocks of roughly equal size. Each of the blocks is left out in turn and the other k-1 blocks are used to train the model. The held-out block is predicted and these predictions are summarized into some type of performance measure (e.g., accuracy, root mean squared error (RMSE), etc.). The k estimates of performance are averaged to get the overall resampled estimate.
###Code
kf = model_selection.KFold(n_splits=5, shuffle=True, random_state=1234)
aucs_single = []
aucs_model = []
for i_train, i_test in kf.split(data):
train = data.iloc[i_train]
test = data.iloc[i_test ]
scaler = preprocessing.RobustScaler()
scaler.fit(get_features(train))
model = linear_model.LogisticRegression()
model.fit(scaler.transform(get_features(train)), train.Label)
preds_test = model.predict_proba(
scaler.transform(get_features(test))
)[:,1]
aucs_model .append(metrics.roc_auc_score(test.Label, preds_test))
aucs_single.append(metrics.roc_auc_score(test.Label, test.DLLkaon))
plt.figure(figsize=(7, 1.7))
plt.scatter(aucs_model , [0] * len(aucs_model ), s=100, alpha=0.5, c='r', label='Our model');
plt.scatter(aucs_single, [0] * len(aucs_single), s=100, alpha=0.5, c='b', label='just DLLkaon');
plt.yticks([]);
plt.ylim(-0.2, 0.4)
plt.xlabel("AUC");
plt.legend();
###Output
_____no_output_____ |
experiments/Final_Transformer_MXNet_51200_11_10_202.ipynb | ###Markdown
###Code
!pip install -U mxnet-cu101==1.7.0
!pip install d2l==0.14.4
### !pip install ipython-autotime
### %load_ext autotime
import math
from d2l import mxnet as d2l
from mxnet import np, npx
from mxnet.gluon import nn
from mxnet import np, npx, init, gluon, autograd
import collections
import os
import time
npx.set_np()
from mxnet import autograd, np, npx
###Output
_____no_output_____
###Markdown
The code for the Transformer from scratch is collected here. The code is mostly from http://d2l.ai/chapter_attention-mechanisms/transformer.html . I added many comments to the code at the most difficult points; I hope the additional code and comments will help in better understanding of the Transformer.

This is the original article for the Transformer: Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems (pp. 5998–6008).

Future work:
1. Train the Transformer on a big data set.
2. Translation from (to) English to Finnish.
3. Modify the architecture of the Transformer.
4. Better tokenization and preprocessing.

Attention Mechanism

Masked softmax

This is an important auxiliary function. """ The masked softmax takes a 3-dimensional input and enables us to filter out some elements by specifying a valid length for the last dimension.... As a result, any value outside the valid length will be masked as 0.""" (citation from d2l.ai). The notion of valid length comes from the need to add special `<pad>` tokens when a sentence is shorter than the length we use for all sentences in a batch; these tokens must not participate in prediction.

My comments start with `###`; the comments with a single `#` are from the original d2l.ai code. Some functions for plotting and for downloading specific files from specific places are still taken from the d2l.ai library on GitHub: https://github.com/d2l-ai/d2l-en/blob/master/d2l/mxnet.py . But the biggest part of the code is collected here (and commented).
###Code
### from d2l.ai
def masked_softmax(X, valid_len):
"""Perform softmax by filtering out some elements."""
# X: 3-D tensor, valid_len: 1-D or 2-D tensor
### why 3-D tensor ?
    ### first dimension: we quantify samples within the batch,
    ### so the first dimension determines the number of samples in the batch
    ###
    ### second dimension: we quantify queries;
    ### we may have several queries,
    ### the second dimension determines the number of queries
    ###
    ### we may set up the valid lengths the same for every sample in the
    ### batch, i.e. a 1-D valid_len with size (batch_size, );
    ### "the same" means: independent of the queries.
    ### On the contrary: we may set up valid lengths individually for every
    ### sample in a batch and for every query;
    ### in this case it will be a 2-D valid length
    ### with size (batch_size, number of queries)
###
### Third parameter will correspond to the number of key/value pairs
###
    ### We may need the valid_length when: 1. we <pad> the end of a sentence that is too
    ### short (shorter than num_steps); 2. we use the valid_length in the decoder
    ### during training, where every word in the target sentence is used as a query: the query
    ### may see all the words to its left, but not those to the right (see the
    ### encoder/decoder code below). To handle that case we use valid_length too.
###
if valid_len is None:
return npx.softmax(X)
else:
shape = X.shape
if valid_len.ndim == 1:
valid_len = valid_len.repeat(shape[1], axis=0)
else:
valid_len = valid_len.reshape(-1)
# Fill masked elements with a large negative, whose exp is 0
X = npx.sequence_mask(X.reshape(-1, shape[-1]), valid_len, True,
axis=1, value=-1e6)
return npx.softmax(X).reshape(shape)
### from d2l.ai
masked_softmax(np.random.uniform(size=(2, 2, 4)), np.array([2, 3]))
### 2 - number of samples in the batch
### 2 - we deal with 2 queries
### 4 - four key/value pairs
### for the first sample in our batch , from 4 pairs we will take
### into account only results from first 2 pairs, the rest will be multiplied by 0,
### because that pairs correspond to <pad> tokens
### for the second sample (4 key/value pairs) we will take into account
### only results for first 3 key/value pairs (the rest will masked with 0,
### because the rest pairs correspond to <pad> tokens)
### this is the meaning of np.array([2,3]) as valid length
### the velid length is not dependent from queries in this case
### from d2l.ai
npx.batch_dot(np.ones((2, 1, 3)), np.ones((2, 3, 2)))
### one more example with 1-D valid length
valid_length = np.array([2,3])
### the shape is (2,) : one dimentional length
print('valid_length shape= ', valid_length.shape)
masked_softmax(np.random.uniform (size =(2, 3, 5)), valid_length )
### if we declare 2-D valid_length
valid_length = np.array([[3, 5, 4], [2,4, 1], [1,4, 3],[1,2,3]])
print('valid_length shape= ', valid_length.shape)
masked_softmax(np.random.uniform(size = (4, 3, 5)), valid_length)
### Let us consider the first sample in our batch
### [[0.21225105, 0.31475353, 0.4729953 , 0. , 0. ,
### 0. ],
### [0.19417836, 0.20596693, 0.16711308, 0.15453914, 0.27820238,
### 0. ],
### [0.2753876 , 0.21671425, 0.30811197, 0.19978616, 0. ,
### 0. ]],
### from the third dimension in np.random.uniform(size = (4, 3, 5)) we may see it corresponds to
### 5 key/value pairs (that is why the length of the lines is 5)
### the second dimension in np.random.uniform(size = (4, 3, 5)) means the results are obtained from
### 3 queries, that is why there are 3 lines corresponding to every sample in the batch
###
### Below we may see there are 4 groups, because the first dimention, the
### number of samples, is 4 (batch size)
###
### np.array([[3, 5, 4], [2,4, 1], [1,4, 3],[1,2,3]])
### is 2-D array (of size 4 * 3 in our case))
### 4 is batch size, 3 is number of queries : we have 4 groups with 3 lines in each;
### the [3,5,4] subarray correspond to the first sample in the batch,
### in the first group : the first line has first 3 non zero elements,
### the second line 5 first non zero and third line 4 first non zero elements.
###Output
valid_length shape= (4, 3)
###Markdown
Dot product attention

Why we need it, and how it is calculated. We have a query of dimension `d`, and we have kv_pairs key/value pairs; every key is a vector of dimension `d` and every value a vector of dimension dim_v. We pass the query through the 'grid' of kv_pairs keys and obtain kv_pairs scores: within the pass we take the dot product of the query with each of the kv_pairs keys. We also normalize the scores by dividing by $\sqrt{d}$. If we have a batch of size batch_size and number of queries = queries, we get a tensor of scores of size (batch_size, queries, kv_pairs); in this way we obtain the attention_weights tensor. We also have a tensor 'value' of size (batch_size, kv_pairs, dim_v). Finally, npx.batch_dot(attention_weights, value) gives a tensor of size (batch_size, queries, dim_v), which corresponds to 'passing' our queries through the 'grid' of key/value pairs: for every query and every sample in the batch we get a transformed vector of size dim_v.
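In matrix form, per batch sample, with the queries stacked into $Q$, the keys into $K$, and the values into $V$, this is exactly what the code below computes: $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^\top}{\sqrt{d}}\right) V$$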
###Code
### from d2l.ai book
class DotProductAttention(nn.Block):
def __init__(self, dropout, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
# `query`: (`batch_size`, #queries, `d`)
# `key`: (`batch_size`, #kv_pairs, `d`)
# `value`: (`batch_size`, #kv_pairs, `dim_v`)
# `valid_len`: either (`batch_size`, ) or (`batch_size`, xx)
def forward(self, query, key, value, valid_len=None):
d = query.shape[-1]
# Set transpose_b=True to swap the last two dimensions of key
scores = npx.batch_dot(query, key, transpose_b=True) / math.sqrt(d)
attention_weights = self.dropout(masked_softmax(scores, valid_len))
return npx.batch_dot(attention_weights, value)
if False:
### the code from d2l.ai
atten = DotProductAttention(dropout=0.5)
atten.initialize()
### batch size of 2, #kv_pairs = 10, every key is vector of size 2 with
### ones : (1.,1.)
keys = np.ones((2, 10, 2))
### we start with vector which keep float numbers from 0 to 39;
### reshape it to tensor which model one sample batch with 10 key/value pairs and
### dimension of values dim_v = 4; finally we repeat the construction to get 2
### similar samples (batch with 2 samples).
values = np.arange(40).reshape(1, 10, 4).repeat(2, axis=0)
atten(np.ones((2, 1, 2)), keys, values, np.array([2, 6]))
if False:
atten = DotProductAttention(dropout=0.5)
atten.initialize()
keys = np.ones((3,10,5)) # keys in batch of size 3; for every line in batch we have
    ### 10 key/value pairs, where every key is a 5-dimensional vector (and every value will be a 7-dimensional vector);
### each key is forming pair with value, there are 10 such pairs
values = np.arange(70).reshape(1,10,7).repeat(3, axis =0) # values in batch of
### size 3 ; 10 values with 7 dimentional vector each;
### in our batch the 3 samples are identical by construction
    queries = np.ones((3,4,5)) # queries in batch of size 3, there are 4 queries,
### where every query is vector of size 5 (same size as for key)
atten(queries, keys, values, np.array([3, 8, 6])) # values in batch of size 3 ;
    ### 4 queries per sample in the batch, where every query is a vector of size 5
### the valid_len is 1-D
### for the 3 samples the valid_length have size 3 , 8 , 6 ;
### size 3 for first sample , ....., ..... size 6 for the last sample
### the outputs are:
### for every entry in the batch (for every of the 3 samples)
### for every of 4 queries
### total : 3*4 = 12 final values: vectors of size 7
### the values are different for different samples in the batch ,
### because we used different valid length,
### but for every sample group in the batch (same sample, different queries),
### all 4 final values are the same:
    ### even though we use 4 queries, all the queries are equal in our case
###Output
_____no_output_____
###Markdown
Multihead Attention

""" The *multi-head attention* layer consists of $h$ parallel self-attention layers, each one is called a *head*. For each head, before feeding into the attention layer, we project the queries, keys, and values with three dense layers with hidden sizes $p_q$, $p_k$, and $p_v$, respectively. The outputs of these $h$ attention heads are concatenated and then processed by a final dense layer.

![Multi-head attention](../img/multi-head-attention.svg)

Assume that the dimension for a query, a key, and a value are $d_q$, $d_k$, and $d_v$, respectively. Then, for each head $i=1,\ldots, h$, we can train learnable parameters $\mathbf W_q^{(i)}\in\mathbb R^{p_q\times d_q}$, $\mathbf W_k^{(i)}\in\mathbb R^{p_k\times d_k}$, and $\mathbf W_v^{(i)}\in\mathbb R^{p_v\times d_v}$. Therefore, the output for each head is

$$\mathbf o^{(i)} = \mathrm{attention}(\mathbf W_q^{(i)}\mathbf q, \mathbf W_k^{(i)}\mathbf k,\mathbf W_v^{(i)}\mathbf v),$$

where $\textrm{attention}$ can be any attention layer, such as the `DotProductAttention` and `MLPAttention` as we introduced in :numref:`sec_attention`.

After that, the output with length $p_v$ from each of the $h$ attention heads are concatenated to be an output of length $h p_v$, which is then passed the final dense layer with $d_o$ hidden units. The weights of this dense layer can be denoted by $\mathbf W_o\in\mathbb R^{d_o\times h p_v}$. As a result, the multi-head attention output will be

$$\mathbf o = \mathbf W_o \begin{bmatrix}\mathbf o^{(1)}\\\vdots\\\mathbf o^{(h)}\end{bmatrix}.$$

Now we can implement the multi-head attention. Assume that the multi-head attention contain the number heads `num_heads` $=h$, the hidden size `num_hiddens` $=p_q=p_k=p_v$ are the same for the query, key, and value dense layers. In addition, since the multi-head attention keeps the same dimensionality between its input and its output, we have the output feature size $d_o =$ `num_hiddens` as well. """ (citation from d2l.ai book)

There are some problems in the d2l.ai text: it states $p_q$ = $p_k$ = $p_v$ = num_hiddens, and $d_o =$ `num_hiddens` as well. That would make $W_o$ a transformation from an input of size (num_heads * num_hiddens) to an output of size (num_hiddens); if h > 1, the input size and output size cannot be equal. But in the PyTorch code in d2l.ai we have self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias), with equal input and output. It is hidden in the d2l.ai MXNet code, self.W_o = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False), because for a Gluon Dense layer we state only the output dimension (num_hiddens in this case); the input dimension is not stated.

The code below (from the d2l.ai book) also assumes that num_hiddens is a multiple of num_heads; there is no such assumption in the main text of the book, but the code relies on it. The only interpretation of the code below I may give now: $p_v$ * num_heads = num_hiddens (same for $p_q$ = $p_k$ = $p_v$), but not $p_v$ = num_hiddens. I will interpret the code under this assumption.
###Code
### from d2l.ai
class MultiHeadAttention(nn.Block):
def __init__(self, num_hiddens, num_heads, dropout, use_bias=False, **kwargs):
super(MultiHeadAttention, self).__init__(**kwargs)
self.num_heads = num_heads
self.attention = d2l.DotProductAttention(dropout)
### here, as I understand, the num_hiddens = num_heads * p_v
### where p_v (see the text above) is the dimension of the vector
### to which a query is transformed by single head,
### the size of p_v is to be (num_hidden/num_heads)
### it explains what the code below do
self.W_q = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
self.W_k = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
self.W_v = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
### if every head transform query of size `dim` = num_hiddens to
### p_v = p_q = p_k = (num_hidden/num_heads), when we
### concatenate num_heads of such queries, we will get
### vector of size num_hidden again;
### it explains the input / output dimensions for W_o :
### input and output have same dimension = num_hiddens
self.W_o = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
### every query generate num_heads outputs , which we cancatenate in
### one vector of dimention num_hiddens : so the output of every query is
### of size num_heads / num_hiddens;
### to apply self-attention we de-cancatenate the combined output
### to hum_heads of separate outputs from every query
### with size (num_hiddens / num_heads), and
### simultaneously recombine them in single batch (with size num_heads),
### which increase the total batch size to (batch_size * num_heads)
### We have to correct the valid_length to take into account
### the num_heads query transformtions are combined now in single batch.
### After application of self_attention, we make the reverse operation:
### locate the batch samples which correspond to the outputs of the same query
### in different heads, and concatenate them again in one combined output.
### The number of batches decrease and the length of output increase by the
### same factor num_heads.
### These are the roles of transpose_qkv , transpose_output functions below:
def forward(self, query, key, value, valid_len):
# For self-attention, `query`, `key`, and `value` shape:
# (`batch_size`, `seq_len`, `dim`), where `seq_len` is the length of
# input sequence. `valid_len` shape is either (`batch_size`, ) or
# (`batch_size`, `seq_len`).
# Project and transpose `query`, `key`, and `value` from
# (`batch_size`, `seq_len`, `num_hiddens`) to
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
query = transpose_qkv(self.W_q(query), self.num_heads)
key = transpose_qkv(self.W_k(key), self.num_heads)
value = transpose_qkv(self.W_v(value), self.num_heads)
if valid_len is not None:
# Copy `valid_len` by `num_heads` times
if valid_len.ndim == 1:
valid_len = np.tile(valid_len, self.num_heads)
else:
valid_len = np.tile(valid_len, (self.num_heads, 1))
# For self-attention, `output` shape:
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
output = self.attention(query, key, value, valid_len)
# `output_concat` shape: (`batch_size`, `seq_len`, `num_hiddens`)
output_concat = transpose_output(output, self.num_heads)
return self.W_o(output_concat)
### from d2l.ai
def transpose_qkv(X, num_heads):
# Input `X` shape: (`batch_size`, `seq_len`, `num_hiddens`).
# Output `X` shape:
# (`batch_size`, `seq_len`, `num_heads`, `num_hiddens` / `num_heads`)
X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
# `X` shape:
# (`batch_size`, `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
X = X.transpose(0, 2, 1, 3)
# `output` shape:
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
output = X.reshape(-1, X.shape[2], X.shape[3])
return output
### from d2l.ai
def transpose_output(X, num_heads):
# A reversed version of `transpose_qkv`
X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
X = X.transpose(0, 2, 1, 3)
return X.reshape(X.shape[0], X.shape[1], -1)
if False:
### from d2l.ai
### num_hiddens = 100, num_heads=10
cell = MultiHeadAttention(100, 10, 0.5)
cell.initialize()
X = np.ones((2, 4, 5))
valid_len = np.array([2, 3])
cell(X, X, X, valid_len).shape
if False:
    ### this corresponds to the scenario: embedding size 512; num_heads = 8;
### num_hiddens = 512
cell = MultiHeadAttention(512, 8, 0.5)
cell.initialize()
# num of batches is 3 ; seq_len is 20 ; size of embedding is 512
X = np.ones((3, 20, 512))
valid_len = np.array([15,17,12])
cell(X, X, X, valid_len).shape
###Output
_____no_output_____
###Markdown
Position-wise FFN

Two 1 * 1 convolutional layers are applied (equivalently, dense layers applied independently at every position); they extract position-independent features of the word representations, in the same way convolution layers are applied in image recognition networks. """ Similar to the multi-head attention, the position-wise feed-forward network will only change the last dimension size of the input—the feature dimension. In addition, if two items in the input sequence are identical, the according outputs will be identical as well. """ (citation from d2l.ai)
###Code
### from d2l.ai
class PositionWiseFFN(nn.Block):
def __init__(self, ffn_num_hiddens, pw_num_outputs, **kwargs):
super(PositionWiseFFN, self).__init__(**kwargs)
self.dense1 = nn.Dense(ffn_num_hiddens, flatten=False,
activation='relu')
self.dense2 = nn.Dense(pw_num_outputs, flatten=False)
def forward(self, X):
return self.dense2(self.dense1(X))
if False:
ffn = PositionWiseFFN(4, 8)
ffn.initialize()
ffn(np.ones((2, 3, 4)))[0]
###Output
_____no_output_____
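###Markdown
A small check of the position-independence claim (a sketch; the shapes are arbitrary): two identical items in the input sequence should map to identical outputs.
###Code
### identical inputs at different positions give identical outputs
ffn = PositionWiseFFN(4, 8)
ffn.initialize()
X = np.ones((1, 2, 4))  # a batch with two identical "positions"
Y = ffn(X)
print((Y[0, 0] == Y[0, 1]).all())  # expected: True
###Output
_____no_output_____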
###Markdown
Add and Norm

""" we add a layer that contains a residual structure and a layer normalization after both the multi-head attention layer and the position-wise FFN network. Layer normalization is similar to batch normalization ........ One difference is that the mean and variances for the layer normalization are calculated along the last dimension, e.g X.mean(axis=-1) instead of the first batch dimension, e.g., X.mean(axis=0). Layer normalization prevents the range of values in the layers from changing too much, which allows faster training and better generalization ability. """ (citation from d2l.ai)
###Code
if False:
### from d2l.ai
layer = nn.LayerNorm()
layer.initialize()
batch = nn.BatchNorm()
batch.initialize()
X = np.array([[1, 2], [2, 3]])
# Compute mean and variance from `X` in the training mode
with autograd.record():
print('layer norm:', layer(X), '\nbatch norm:', batch(X))
###Output
_____no_output_____
###Markdown
"""AddNorm accepts two inputs X and Y. We can deem X as the original input in the residual network, and Y as the outputs from either the multi-head attention layer or the position-wise FFN network. In addition, we apply dropout on Y for regularization.""" citation from d2l.ai
###Code
### from d2l.ai
class AddNorm(nn.Block):
def __init__(self, dropout, **kwargs):
super(AddNorm, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
self.ln = nn.LayerNorm()
def forward(self, X, Y):
return self.ln(self.dropout(Y) + X)
if False:
### d2l.ai
add_norm = AddNorm(0.5)
add_norm.initialize()
add_norm(np.ones((2, 3, 4)), np.ones((2, 3, 4))).shape
###Output
_____no_output_____
###Markdown
Positional Encoding
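The sinusoidal encoding implemented below (in the second, active PositionalEncoding class) fills position $pos$ and embedding coordinates $2i$, $2i+1$ as $$P_{pos,\,2i} = \sin\left(\frac{pos}{10000^{2i/d}}\right), \qquad P_{pos,\,2i+1} = \cos\left(\frac{pos}{10000^{2i/d}}\right),$$ where $d$ is num_hiddens; this matrix is simply added to the (scaled) embeddings. The formulas just restate the code.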
###Code
### I used the code as alternative to the original positional encoding;
### just encode position of words (tokens) in sentence ,
### it changes the results , but the results are quite well.
if False:
### from d2l.ai
class PositionalEncoding(nn.Block):
def __init__(self, num_hiddens, dropout, max_len=100):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
### max_len correspond to sequence length ;
### num_hiddens correspond to embedding size
###
self.P = np.zeros((1, max_len, num_hiddens))
### X = np.arange(0, max_len).reshape(-1, 1) / np.power(
### 10000, np.arange(0, num_hiddens, 2) / num_hiddens)
### self.P[:, :, 0::2] = np.sin(X)
### self.P[:, :, 1::2] = np.cos(X)
###################### my code be carefull !!!!!
X = np.arange(0, max_len).reshape(-1, 1) / max_len
### 10000, np.arange(0, num_hiddens, 2) / num_hiddens)
self.P[:, :, 0::1] = np.sin(X)
### self.P[:, :, 1::2] = np.cos(X)
################################
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].as_in_ctx(X.ctx)
return self.dropout(X)
### from d2l.ai
class PositionalEncoding(nn.Block):
def __init__(self, num_hiddens, dropout, max_len=1000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
### max_len correspond to sequence length ;
### num_hiddens correspond to embedding size
self.P = np.zeros((1, max_len, num_hiddens))
X = np.arange(0, max_len).reshape(-1, 1) / np.power(
10000, np.arange(0, num_hiddens, 2) / num_hiddens)
self.P[:, :, 0::2] = np.sin(X)
self.P[:, :, 1::2] = np.cos(X)
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].as_in_ctx(X.ctx)
return self.dropout(X)
if False:
### from d2l.ai
### num_hiddens = 20 , dropout = 0
pe = PositionalEncoding(20, 0)
pe.initialize()
### we assume batch_size = 1; max_length = 100 correspond to tokens (here words) in our line;
### num_hiddens = 20 (embedding size)
###
Y = pe(np.zeros((1, 100, 20)))
    ### dim corresponds to a coordinate in the embedding vector of our tokens (words)
d2l.plot(np.arange(100), Y[0, :, 4:8].T, figsize=(6, 2.5),
legend=["dim %d" % p for p in [4, 5, 6, 7]])
###Output
_____no_output_____
###Markdown
Encoder

"""Armed with all the essential components of Transformer, let us first build a Transformer encoder block. This encoder contains a multi-head attention layer, a position-wise feed-forward network, and two “add and norm” connection blocks. As shown in the code, for both of the attention model and the positional FFN model in the EncoderBlock, their outputs’ dimension are equal to the num_hiddens. This is due to the nature of the residual block, as we need to add these outputs back to the original value during “add and norm”. """ (citation from d2l.ai)
###Code
### from d2l.ai
### this block will not change the input shape
class EncoderBlock(nn.Block):
def __init__(self, num_hiddens, ffn_num_hiddens, num_heads, dropout,
use_bias=False, **kwargs):
super(EncoderBlock, self).__init__(**kwargs)
self.attention = MultiHeadAttention(num_hiddens, num_heads, dropout,
use_bias)
self.addnorm1 = AddNorm(dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm2 = AddNorm(dropout)
def forward(self, X, valid_len):
### we sum the original input to the attention block and the output from the
### block + we normilize the result using AddNorm
Y = self.addnorm1(X, self.attention(X, X, X, valid_len))
return self.addnorm2(Y, self.ffn(Y))
###Output
_____no_output_____
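###Markdown
A quick shape check, mirroring the d2l.ai test (the numbers are arbitrary; num_hiddens must be a multiple of num_heads):
###Code
### the encoder block preserves the input shape
encoder_blk = EncoderBlock(24, 48, 8, 0.5)
encoder_blk.initialize()
X = np.ones((2, 100, 24))
valid_len = np.array([3, 2])
print(encoder_blk(X, valid_len).shape)  # expected: (2, 100, 24)
###Output
_____no_output_____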
###Markdown
""" Now it comes to the implementation of the entire Transformer encoder. With the Transformer encoder, $n$ blocks of `EncoderBlock` stack up one after another. Because of the residual connection, the embedding layer size $d$ is same as the Transformer block output size. Also note that we multiply the embedding output by $\sqrt{d}$ to prevent its values from being too small. """ (citation from d2l.ai)
###Code
### from d2l.ai
class Encoder(nn.Block):
"""The base encoder interface for the encoder-decoder architecture."""
def __init__(self, **kwargs):
super(Encoder, self).__init__(**kwargs)
def forward(self, X, *args):
raise NotImplementedError
### from d2l.ai
class TransformerEncoder(Encoder):
def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
num_heads, num_layers, dropout, use_bias=False, **kwargs):
super(TransformerEncoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = PositionalEncoding(num_hiddens, dropout)
self.blks = nn.Sequential()
for _ in range(num_layers):
self.blks.add(
EncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout,
use_bias))
def forward(self, X, valid_len, *args):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
for blk in self.blks:
X = blk(X, valid_len)
return X
###Output
_____no_output_____
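###Markdown
And a shape check of the full encoder stack, again mirroring the d2l.ai test (the vocab size of 200 is arbitrary):
###Code
### the encoder maps (batch, seq_len) token indices to (batch, seq_len, num_hiddens)
encoder = TransformerEncoder(200, 24, 48, 8, 2, 0.5)
encoder.initialize()
valid_len = np.array([3, 2])
print(encoder(np.ones((2, 100)), valid_len).shape)  # expected: (2, 100, 24)
###Output
_____no_output_____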
###Markdown
Decoder

""" During training, the output for the $t$-th query could observe all the previous key-value pairs. It results in a different behavior from prediction. Thus, during prediction we can eliminate the unnecessary information by specifying the valid length to be $t$ for the $t^\textrm{th}$ query. """ (citation from d2l.ai)
###Code
### from d2l.ai
class DecoderBlock(nn.Block):
# `i` means it is the i-th block in the decoder
### the i will be initialized from the TransformerDecoder block
### the block will be used in TransformerDecoder in stack ,
### several blocks will be aranged in sequence, output from
### one block will be input to the next blosk
def __init__(self, num_hiddens, ffn_num_hiddens, num_heads,
dropout, i, **kwargs):
super(DecoderBlock, self).__init__(**kwargs)
self.i = i
### in the block we will aplly (MultiHeadAttention + AddNorm)
### and again (MultiHeadAttention + AddNorm) ;
### then we will apply PositionWiseFFN
self.attention1 = MultiHeadAttention(num_hiddens, num_heads, dropout)
self.addnorm1 = AddNorm(dropout)
self.attention2 = MultiHeadAttention(num_hiddens, num_heads, dropout)
self.addnorm2 = AddNorm(dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm3 = AddNorm(dropout)
def forward(self, X, state):
### we use state [0] and state[1] to keep output from TransformerEncoder :
### enc_outputs and enc_valid_length;
### which correspond to sentences we are translating (sentences in language FROM
### which we translate);
### the state [0] and state [1] are received from TransformerDecoder
### enclosing block as shared parameter;
enc_outputs, enc_valid_len = state[0], state[1]
# `state[2][i]` contains the past queries for this block
        ### on the first call, at this place in the code, the stored
        ### queries = None; see the code in TransformerDecoder:
        ###
        ### def init_state(self, enc_outputs, enc_valid_len, *args):
        ###     return [enc_outputs, enc_valid_len, [None]*self.num_layers]
        ###
        ### the decoder state is initialized from EncoderDecoder
        ### using the 'init_state' function (see above); as we can see,
        ### the third element of the array is [None]*self.num_layers;
        ### 'init_state' determines the 'state' in TransformerDecoder,
        ### and in the code above we use state[0] and state[1] to determine
        ### 'enc_outputs', 'enc_valid_len' in this block
if state[2][self.i] is None:
key_values = X
else:
### queries are processed and concatenated and used as new
### grid of key/value pairs
key_values = np.concatenate((state[2][self.i], X), axis=1)
state[2][self.i] = key_values
if autograd.is_training():
            ### here we are in training mode
### below in 'attention' function we will use X as queries,
### X correspond to all words in the target sentence within training;
### seq_len correspond to the length of the whole target sentence;
### we will use seq_len queries, for every sample in the batch;
### for us important the following:
### first query from the sentence has to be constrained
### to first key_value pair; second: to the first two key_value pairs,
### etc...
### that is why the valid_len is generated in the way:
batch_size, seq_len, _ = X.shape
# Shape: (batch_size, seq_len), the values in the j-th column
# are j+1
### while training we take into account the result of passing a query
            ### in the target sentence through the 'grid' of key/value pairs to the
### left of the query ;
### every query in the target sequence has its own valid_len and
### the valid_len correspond to the position of a query in the
### sentence
valid_len = np.tile(np.arange(1, seq_len + 1, ctx=X.ctx),
(batch_size, 1))
else:
valid_len = None
### the attention mechanism is used on key_values corresponding
### to the target sentence key_values (then AddNorm is applied)
X2 = self.attention1(X, key_values, key_values, valid_len)
Y = self.addnorm1(X, X2)
### the attention mechanism is used on TransformerEncoder outputs
### key_values as the 'grid' (then AddNorm is applied);
### the key/values are the learned pairs
### which are originated from the source sentence
Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_len)
Z = self.addnorm2(Y, Y2)
return self.addnorm3(Z, self.ffn(Z)), state
### from d2l.ai
class Decoder(nn.Block):
"""The base decoder interface for the encoder-decoder architecture."""
def __init__(self, **kwargs):
super(Decoder, self).__init__(**kwargs)
def init_state(self, enc_outputs, *args):
raise NotImplementedError
def forward(self, X, state):
raise NotImplementedError
### from d2l.ai
class TransformerDecoder(Decoder):
def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
num_heads, num_layers, dropout, **kwargs):
super(TransformerDecoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.num_layers = num_layers
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = PositionalEncoding(num_hiddens, dropout)
### sequential application of several DecoderBlock's
self.blks = nn.Sequential()
for i in range(num_layers):
self.blks.add(
DecoderBlock(num_hiddens, ffn_num_hiddens, num_heads,
dropout, i))
self.dense = nn.Dense(vocab_size, flatten=False)
def init_state(self, enc_outputs, env_valid_len, *args):
return [enc_outputs, env_valid_len, [None]*self.num_layers]
def forward(self, X, state):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
for blk in self.blks:
X, state = blk(X, state)
return self.dense(X), state
### from d2l.ai
### this block couples together TransformerEncoder and TransformerDecoder
###
class EncoderDecoder(nn.Block):
"""The base class for the encoder-decoder architecture."""
def __init__(self, encoder, decoder, **kwargs):
super(EncoderDecoder, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
def forward(self, enc_X, dec_X, *args):
### the enc_outputs are moved to decoder from encoder;
### the coupling happens in this point of code
enc_outputs = self.encoder(enc_X, *args)
### initial decoder state: dec_state is calculated using the dec_outputs
### and used as 'state' in TransformerDecoder
dec_state = self.decoder.init_state(enc_outputs, *args)
### use initial state + input dec_X to the decoder to calculate
### the decoder output
return self.decoder(dec_X, dec_state)
###Output
_____no_output_____
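###Markdown
A quick end-to-end shape check of the assembled model (a sketch; the toy sizes mirror the encoder test above):
###Code
### forward pass: (batch, src_len) + (batch, tgt_len) -> (batch, tgt_len, vocab_size)
enc = TransformerEncoder(200, 24, 48, 8, 2, 0.5)
dec = TransformerDecoder(200, 24, 48, 8, 2, 0.5)
net = EncoderDecoder(enc, dec)
net.initialize()
X = np.ones((2, 100))
valid_len = np.array([3, 2])
Y_hat, state = net(X, X, valid_len)
print(Y_hat.shape)  # expected: (2, 100, 200)
###Output
_____no_output_____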
###Markdown
Training
###Code
### from d2l.ai
### because of the padding (and valid_length) we have to filter out some entries
class MaskedSoftmaxCELoss(gluon.loss.SoftmaxCELoss):
# `pred` shape: (`batch_size`, `seq_len`, `vocab_size`)
# `label` shape: (`batch_size`, `seq_len`)
# `valid_len` shape: (`batch_size`, )
def forward(self, pred, label, valid_len):
# weights shape: (batch_size, seq_len, 1)
weights = np.expand_dims(np.ones_like(label), axis=-1)
weights = npx.sequence_mask(weights, valid_len, True, axis=1)
return super(MaskedSoftmaxCELoss, self).forward(pred, label, weights)
if False:
### from d2l.ai
loss = MaskedSoftmaxCELoss()
loss(np.ones((3, 4, 10)), np.ones((3, 4)), np.array([4, 2, 0]))
### from d2l.ai
### prevents too high gradients
def grad_clipping(model, theta):
"""Clip the gradient."""
if isinstance(model, gluon.Block):
params = [p.data() for p in model.collect_params().values()]
else:
params = model.params
norm = math.sqrt(sum((p.grad ** 2).sum() for p in params))
if norm > theta:
for param in params:
param.grad[:] *= theta / norm
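### In math form: with g the concatenated gradient vector, grad_clipping rescales
### g <- theta * g / ||g||_2 whenever ||g||_2 > theta, so the norm never exceeds theta.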
### from d2l.ai
### accumulate results in one array, auxiliary function
class Accumulator:
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
### from d2l.ai
def train_s2s_ch9(model, data_iter, lr, num_epochs, device):
model.initialize(init.Xavier(), force_reinit=True, ctx=device)
trainer = gluon.Trainer(model.collect_params(),
'adam', {'learning_rate': lr})
loss = MaskedSoftmaxCELoss()
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[1, num_epochs], ylim=[0, 1.00])
for epoch in range(1, num_epochs + 1):
timer = d2l.Timer()
metric = d2l.Accumulator(2) # loss_sum, num_tokens
### use data_iter from load_data_nmt to get X and Y which include:
### the source and target
        ### sentence representations + X_vlen and Y_vlen : the valid lengths of
        ### the sentences
for batch in data_iter:
X, X_vlen, Y, Y_vlen = [x.as_in_ctx(device) for x in batch]
Y_input, Y_label, Y_vlen = Y[:, :-1], Y[:, 1:], Y_vlen-1
with autograd.record():
Y_hat, _ = model(X, Y_input, X_vlen, Y_vlen)
l = loss(Y_hat, Y_label, Y_vlen)
l.backward()
grad_clipping(model, 1)
num_tokens = Y_vlen.sum()
trainer.step(num_tokens)
metric.add(l.sum(), num_tokens)
if epoch % 10 == 0:
animator.add(epoch, (metric[0]/metric[1],))
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} '
f'tokens/sec on {str(device)}')
###Output
_____no_output_____
###Markdown
Reading and Processing the Text
###Code
### from d2l.ai
def download_extract(name, folder=None):
"""Download and extract a zip/tar file."""
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == '.zip':
fp = zipfile.ZipFile(fname, 'r')
elif ext in ('.tar', '.gz'):
fp = tarfile.open(fname, 'r')
else:
assert False, 'Only zip/tar files can be extracted.'
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
###Output
_____no_output_____
###Markdown
""" ... a dataset that contains a set of English sentences with the corresponding French translations. As can be seen that each line contains an English sentence with its French translation, which are separated by a TAB.""" (citation from d2l.ai)
###Code
### d2l.ai
### the data for the translation are prepared by the d2l.ai project (book)
d2l.DATA_HUB['fra-eng'] = (d2l.DATA_URL + 'fra-eng.zip',
'94646ad1522d915e7b0f9296181140edcf86a4f5')
def read_data_nmt():
data_dir = d2l.download_extract('fra-eng')
with open(os.path.join(data_dir, 'fra.txt'), 'r') as f:
return f.read()
raw_text = read_data_nmt()
print(raw_text[0:106])
### from d2l.ai
def preprocess_nmt(text):
def no_space(char, prev_char):
return char in set(',.!') and prev_char != ' '
text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower()
out = [' ' + char if i > 0 and no_space(char, text[i-1]) else char
for i, char in enumerate(text)]
return ''.join(out)
### from d2l.ai
text = preprocess_nmt(raw_text)
print(text[0:95])
### from d2l.ai
def tokenize_nmt(text, num_examples=None):
source, target = [], []
for i, line in enumerate(text.split('\n')):
if num_examples and i > num_examples:
break
parts = line.split('\t')
if len(parts) == 2:
source.append(parts[0].split(' '))
target.append(parts[1].split(' '))
return source, target
### from d2l.ai
source, target = tokenize_nmt(text)
source[0:3], target[0:3]
###Output
_____no_output_____
###Markdown
Histogram of the number of tokens per sentence

Most sentences have about 5 tokens; the number of tokens per sentence is usually below 10-15.
###Code
### from d2l.ai
d2l.set_figsize()
d2l.plt.hist([[len(l) for l in source], [len(l) for l in target]],
label=['source', 'target'])
d2l.plt.legend(loc='upper right');
###Output
_____no_output_____
###Markdown
Vocabulary
###Code
### from d2l.ai
def count_corpus(tokens):
"""Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
### from d2l.ai
class Vocab:
"""Vocabulary for text."""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[0])
self.token_freqs.sort(key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ['<unk>'] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs
if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
### from d2l.ai
src_vocab = Vocab(source, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
len(src_vocab)
###Output
_____no_output_____
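###Markdown
A quick look at the mapping (the reserved tokens always occupy the first indices by construction; the indices of ordinary tokens depend on corpus frequencies):
###Code
### token -> index and index -> token round trip
print(src_vocab[['<unk>', '<pad>', '<bos>', '<eos>']])  # expected: [0, 1, 2, 3]
print(src_vocab.to_tokens([0, 1, 2, 3]))
###Output
_____no_output_____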
###Markdown
Loading the dataset
###Code
### from d2l.ai
def truncate_pad(line, num_steps, padding_token):
if len(line) > num_steps:
return line[:num_steps] # Trim
return line + [padding_token] * (num_steps - len(line)) # Pad
### the <pad> is represented by number 1 in Vocabuary
### from d2l.ai
truncate_pad(src_vocab[source[0]], 10, src_vocab['<pad>'])
### from d2l.ai
def build_array(lines, vocab, num_steps, is_source):
lines = [vocab[l] for l in lines]
if not is_source:
lines = [[vocab['<bos>']] + l + [vocab['<eos>']] for l in lines]
array = np.array([truncate_pad(
l, num_steps, vocab['<pad>']) for l in lines])
valid_len = (array != vocab['<pad>']).sum(axis=1)
return array, valid_len
### from d2l.ai
def load_array(data_arrays, batch_size, is_train=True):
"""Construct a Gluon data iterator."""
dataset = gluon.data.ArrayDataset(*data_arrays)
return gluon.data.DataLoader(dataset, batch_size, shuffle=is_train)
### from d2l.ai
### quite importand function to construct dataset for training (data_iter)
### from original data
def load_data_nmt(batch_size, num_steps, num_examples=51200):
text = preprocess_nmt(read_data_nmt())
source, target = tokenize_nmt(text, num_examples)
src_vocab = Vocab(source, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
tgt_vocab = Vocab(target, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
src_array, src_valid_len = build_array(
source, src_vocab, num_steps, True)
tgt_array, tgt_valid_len = build_array(
target, tgt_vocab, num_steps, False)
data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len)
data_iter = load_array(data_arrays, batch_size)
return src_vocab, tgt_vocab, data_iter
### from d2l.ai
def try_gpu(i=0):
"""Return gpu(i) if exists, otherwise return cpu()."""
return npx.gpu(i) if npx.num_gpus() >= i + 1 else npx.cpu()
###Output
_____no_output_____
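###Markdown
A quick peek at one batch from the iterator (a sketch: it rebuilds the data with a small num_examples only for speed, and uses separate variable names to avoid clobbering the ones used in training below):
###Code
### each batch yields padded token matrices plus the per-sentence valid lengths
demo_src_vocab, demo_tgt_vocab, demo_iter = load_data_nmt(batch_size=2, num_steps=8, num_examples=1000)
for X, X_vlen, Y, Y_vlen in demo_iter:
    print('X:', X.shape, 'X_vlen:', X_vlen)
    print('Y:', Y.shape, 'Y_vlen:', Y_vlen)
    break
###Output
_____no_output_____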
###Markdown
Model: training and prediction
###Code
### the code from d2l.ai
### estimate the execution time for the cell in seconds
start = time.time()
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.0, 512, 10
lr, num_epochs, device = 0.001, 800, try_gpu()
ffn_num_hiddens, num_heads = 64, 4 ### num_hiddens is to be a multiple of num_heads !!
src_vocab, tgt_vocab, train_iter = load_data_nmt(batch_size, num_steps)
encoder = TransformerEncoder(
len(src_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
dropout)
decoder = TransformerDecoder(
    len(tgt_vocab),  ### the decoder vocabulary is the target vocabulary
    num_hiddens, ffn_num_hiddens, num_heads, num_layers,
dropout)
model = EncoderDecoder(encoder, decoder)
train_s2s_ch9(model, train_iter, lr, num_epochs, device)
### estimate the execution time for the cell
end = time.time()
print(end - start)
### from d2l.ai
def predict_s2s_ch9(model, src_sentence, src_vocab, tgt_vocab, num_steps,
device):
src_tokens = src_vocab[src_sentence.lower().split(' ')]
enc_valid_len = np.array([len(src_tokens)], ctx=device)
src_tokens = truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
enc_X = np.array(src_tokens, ctx=device)
# Add the batch size dimension
enc_outputs = model.encoder(np.expand_dims(enc_X, axis=0),
enc_valid_len)
dec_state = model.decoder.init_state(enc_outputs, enc_valid_len)
dec_X = np.expand_dims(np.array([tgt_vocab['<bos>']], ctx=device), axis=0)
predict_tokens = []
for _ in range(num_steps):
Y, dec_state = model.decoder(dec_X, dec_state)
# The token with highest score is used as the next time step input
dec_X = Y.argmax(axis=2)
py = dec_X.squeeze(axis=0).astype('int32').item()
if py == tgt_vocab['<eos>']:
break
predict_tokens.append(py)
return ' '.join(tgt_vocab.to_tokens(predict_tokens))
for sentence in ['Go .', 'Wow !', "I'm OK .", 'I won !',
'Let it be !', 'How are you ?', 'How old are you ?',
'Cats are cats, dogs are dogs .', 'My friend lives in US .',
'He is fifty nine years old .', 'I like music and science .',
'I love you .', 'The dog is chasing the cat .',
'Somewhere on the earth .', 'Do not worry !',
'Sit down, please !', 'Not at all !', 'It is very very strange .',
'Take it into account .', 'The dark side of the moon .',
'Come on !', 'We are the champions, my friends .']:
print(sentence + ' => ' + predict_s2s_ch9(
model, sentence, src_vocab, tgt_vocab, num_steps, device))
###Output
Go . => vous êtes <unk> .
Wow ! => vous êtes <unk> .
I'm OK . => je suis <unk> .
I won ! => je suis <unk> .
Let it be ! => vous êtes <unk> .
How are you ? => vous êtes <unk> .
How old are you ? => vous êtes <unk> .
Cats are cats, dogs are dogs . => vous êtes <unk> .
My friend lives in US . => vous êtes <unk> .
He is fifty nine years old . => vous êtes <unk> .
I like music and science . => je suis <unk> .
I love you . => je suis <unk> .
The dog is chasing the cat . => vous êtes <unk> .
Somewhere on the earth . => vous êtes <unk> .
Do not worry ! => vous êtes <unk> .
Sit down, please ! => vous êtes <unk> .
Not at all ! => vous êtes <unk> .
It is very very strange . => vous êtes <unk> .
Take it into account . => vous êtes <unk> .
The dark side of the moon . => vous êtes <unk> .
Come on ! => vous êtes <unk> .
We are the champions, my friends . => vous êtes <unk> .
|
Chapter07/03_stock_price_prediction/logistic_regression.ipynb | ###Markdown
Data Sources
###Code
################
# Fundamentals #
################
# Morningstar fundamentals (2002 - Ongoing)
# https://www.quantopian.com/help/fundamentals
from quantopian.pipeline.data import Fundamentals
#####################
# Analyst Estimates #
#####################
# Earnings Surprises - Zacks (27 May 2006 - Ongoing)
# https://www.quantopian.com/data/zacks/earnings_surprises
from quantopian.pipeline.data.zacks import EarningsSurprises
from quantopian.pipeline.factors.zacks import BusinessDaysSinceEarningsSurprisesAnnouncement
##########
# Events #
##########
# Buyback Announcements - EventVestor (01 Jun 2007 - Ongoing)
# https://www.quantopian.com/data/eventvestor/buyback_auth
from quantopian.pipeline.data.eventvestor import BuybackAuthorizations
from quantopian.pipeline.factors.eventvestor import BusinessDaysSinceBuybackAuth
# CEO Changes - EventVestor (01 Jan 2007 - Ongoing)
# https://www.quantopian.com/data/eventvestor/ceo_change
from quantopian.pipeline.data.eventvestor import CEOChangeAnnouncements
# Dividends - EventVestor (01 Jan 2007 - Ongoing)
# https://www.quantopian.com/data/eventvestor/dividends
from quantopian.pipeline.data.eventvestor import (
DividendsByExDate,
DividendsByPayDate,
DividendsByAnnouncementDate,
)
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysSincePreviousExDate,
BusinessDaysUntilNextExDate,
BusinessDaysSinceDividendAnnouncement,
)
# Earnings Calendar - EventVestor (01 Jan 2007 - Ongoing)
# https://www.quantopian.com/data/eventvestor/earnings_calendar
from quantopian.pipeline.data.eventvestor import EarningsCalendar
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysUntilNextEarnings,
BusinessDaysSincePreviousEarnings
)
# 13D Filings - EventVestor (01 Jan 2007 - Ongoing)
# https://www.quantopian.com/data/eventvestor/_13d_filings
from quantopian.pipeline.data.eventvestor import _13DFilings
from quantopian.pipeline.factors.eventvestor import BusinessDaysSince13DFilingsDate
#############
# Sentiment #
#############
# News Sentiment - Sentdex Sentiment Analysis (15 Oct 2012 - Ongoing)
# https://www.quantopian.com/data/sentdex/sentiment
from quantopian.pipeline.data.sentdex import sentiment
###Output
_____no_output_____
###Markdown
Setup
###Code
# trading days per period
MONTH = 21
YEAR = 12 * MONTH
START = '2014-01-01'
END = '2015-12-31'
###Output
_____no_output_____
###Markdown
Universe
###Code
def Q100US():
return filters.make_us_equity_universe(
target_size=100,
rankby=factors.AverageDollarVolume(window_length=200),
mask=filters.default_us_equity_universe_mask(),
groupby=classifiers.fundamentals.Sector(),
max_group_weight=0.3,
smoothing_func=lambda f: f.downsample('month_start'),
)
# UNIVERSE = StaticAssets(symbols(['MSFT', 'AAPL']))
UNIVERSE = Q100US()
class AnnualizedData(CustomFactor):
# Get the sum of the last 4 reported values
window_length = 260
def compute(self, today, assets, out, asof_date, values):
for asset in range(len(assets)):
# unique asof dates indicate availability of new figures
_, filing_dates = np.unique(asof_date[:, asset], return_index=True)
quarterly_values = values[filing_dates[-4:], asset]
            # ignore annual windows with <4 valid quarterly data points
            if np.count_nonzero(~np.isnan(quarterly_values)) != 4:
out[asset] = np.nan
else:
out[asset] = np.sum(quarterly_values)
class AnnualAvg(CustomFactor):
window_length = 252
def compute(self, today, assets, out, values):
out[:] = (values[0] + values[-1])/2
def factor_pipeline(factors):
start = time()
pipe = Pipeline({k: v(mask=UNIVERSE).rank() for k, v in factors.items()},
screen=UNIVERSE)
result = run_pipeline(pipe, start_date=START, end_date=END)
return result, time() - start
###Output
_____no_output_____
###Markdown
Value Factors
###Code
class ValueFactors:
"""Definitions of factors for cross-sectional trading algorithms"""
@staticmethod
def PriceToSalesTTM(**kwargs):
"""Last closing price divided by sales per share"""
return Fundamentals.ps_ratio.latest
@staticmethod
def PriceToEarningsTTM(**kwargs):
"""Closing price divided by earnings per share (EPS)"""
return Fundamentals.pe_ratio.latest
@staticmethod
def PriceToDilutedEarningsTTM(mask):
"""Closing price divided by diluted EPS"""
last_close = USEquityPricing.close.latest
diluted_eps = AnnualizedData(inputs = [Fundamentals.diluted_eps_earnings_reports_asof_date,
Fundamentals.diluted_eps_earnings_reports],
mask=mask)
return last_close / diluted_eps
@staticmethod
def PriceToForwardEarnings(**kwargs):
"""Price to Forward Earnings"""
return Fundamentals.forward_pe_ratio.latest
@staticmethod
def DividendYield(**kwargs):
"""Dividends per share divided by closing price"""
return Fundamentals.trailing_dividend_yield.latest
@staticmethod
def PriceToFCF(mask):
"""Price to Free Cash Flow"""
last_close = USEquityPricing.close.latest
fcf_share = AnnualizedData(inputs = [Fundamentals.fcf_per_share_asof_date,
Fundamentals.fcf_per_share],
mask=mask)
return last_close / fcf_share
@staticmethod
def PriceToOperatingCashflow(mask):
"""Last Close divided by Operating Cash Flows"""
last_close = USEquityPricing.close.latest
cfo_per_share = AnnualizedData(inputs = [Fundamentals.cfo_per_share_asof_date,
Fundamentals.cfo_per_share],
mask=mask)
return last_close / cfo_per_share
@staticmethod
def PriceToBook(mask):
"""Closing price divided by book value"""
last_close = USEquityPricing.close.latest
book_value_per_share = AnnualizedData(inputs = [Fundamentals.book_value_per_share_asof_date,
Fundamentals.book_value_per_share],
mask=mask)
return last_close / book_value_per_share
@staticmethod
def EVToFCF(mask):
"""Enterprise Value divided by Free Cash Flows"""
fcf = AnnualizedData(inputs = [Fundamentals.free_cash_flow_asof_date,
Fundamentals.free_cash_flow],
mask=mask)
return Fundamentals.enterprise_value.latest / fcf
@staticmethod
def EVToEBITDA(mask):
"""Enterprise Value to Earnings Before Interest, Taxes, Deprecation and Amortization (EBITDA)"""
ebitda = AnnualizedData(inputs = [Fundamentals.ebitda_asof_date,
Fundamentals.ebitda],
mask=mask)
return Fundamentals.enterprise_value.latest / ebitda
@staticmethod
def EBITDAYield(mask):
"""EBITDA divided by latest close"""
ebitda = AnnualizedData(inputs = [Fundamentals.ebitda_asof_date,
Fundamentals.ebitda],
mask=mask)
return USEquityPricing.close.latest / ebitda
VALUE_FACTORS = {
'DividendYield' : ValueFactors.DividendYield,
'EBITDAYield' : ValueFactors.EBITDAYield,
'EVToEBITDA' : ValueFactors.EVToEBITDA,
'EVToFCF' : ValueFactors.EVToFCF,
'PriceToBook' : ValueFactors.PriceToBook,
'PriceToDilutedEarningsTTM': ValueFactors.PriceToDilutedEarningsTTM,
'PriceToEarningsTTM' : ValueFactors.PriceToEarningsTTM,
'PriceToFCF' : ValueFactors.PriceToFCF,
'PriceToForwardEarnings' : ValueFactors.PriceToForwardEarnings,
'PriceToOperatingCashflow' : ValueFactors.PriceToOperatingCashflow,
'PriceToSalesTTM' : ValueFactors.PriceToSalesTTM,
}
value_result, t = factor_pipeline(VALUE_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
value_result.info()
###Output
/usr/local/lib/python2.7/dist-packages/numpy/lib/arraysetops.py:200: FutureWarning: In the future, NAT != NAT will be True rather than False.
flag = np.concatenate(([True], aux[1:] != aux[:-1]))
###Markdown
Momentum
###Code
class MomentumFactors:
"""Custom Momentum Factors"""
class PercentAboveLow(CustomFactor):
"""Percentage of current close above low
in lookback window of window_length days
"""
inputs = [USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, close):
out[:] = close[-1] / np.min(close, axis=0) - 1
class PercentBelowHigh(CustomFactor):
"""Percentage of current close below high
in lookback window of window_length days
"""
inputs = [USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, close):
out[:] = close[-1] / np.max(close, axis=0) - 1
@staticmethod
def make_dx(timeperiod=14):
class DX(CustomFactor):
"""Directional Movement Index"""
inputs = [USEquityPricing.high,
USEquityPricing.low,
USEquityPricing.close]
window_length = timeperiod + 1
def compute(self, today, assets, out, high, low, close):
out[:] = [talib.DX(high[:, i],
low[:, i],
close[:, i],
timeperiod=timeperiod)[-1]
for i in range(len(assets))]
return DX
@staticmethod
def make_mfi(timeperiod=14):
class MFI(CustomFactor):
"""Money Flow Index"""
inputs = [USEquityPricing.high,
USEquityPricing.low,
USEquityPricing.close,
USEquityPricing.volume]
window_length = timeperiod + 1
def compute(self, today, assets, out, high, low, close, vol):
out[:] = [talib.MFI(high[:, i],
low[:, i],
close[:, i],
vol[:, i],
timeperiod=timeperiod)[-1]
for i in range(len(assets))]
return MFI
@staticmethod
def make_oscillator(fastperiod=12, slowperiod=26, matype=0):
class PPO(CustomFactor):
"""12/26-Day Percent Price Oscillator"""
inputs = [USEquityPricing.close]
window_length = slowperiod
def compute(self, today, assets, out, close_prices):
out[:] = [talib.PPO(close,
fastperiod=fastperiod,
slowperiod=slowperiod,
matype=matype)[-1]
for close in close_prices.T]
return PPO
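# Note: the make_* methods are factories -- each closes over its indicator
# parameters and returns a freshly parameterized CustomFactor subclass, so the
# same indicator can be registered several times with different settings.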
@staticmethod
def make_stochastic_oscillator(fastk_period=5, slowk_period=3, slowd_period=3,
slowk_matype=0, slowd_matype=0):
class StochasticOscillator(CustomFactor):
"""20-day Stochastic Oscillator """
inputs = [USEquityPricing.high,
USEquityPricing.low,
USEquityPricing.close]
outputs = ['slowk', 'slowd']
window_length = fastk_period * 2
def compute(self, today, assets, out, high, low, close):
    # talib.STOCH returns the full (slowk, slowd) series; unpacking a list
    # comprehension over assets only works for exactly two assets, so
    # compute per asset and keep the most recent value of each output.
    for i in range(len(assets)):
        slowk, slowd = talib.STOCH(high[:, i],
                                   low[:, i],
                                   close[:, i],
                                   fastk_period=fastk_period,
                                   slowk_period=slowk_period,
                                   slowk_matype=slowk_matype,
                                   slowd_period=slowd_period,
                                   slowd_matype=slowd_matype)
        out.slowk[i], out.slowd[i] = slowk[-1], slowd[-1]
return StochasticOscillator
@staticmethod
def make_trendline(timeperiod=252):
class Trendline(CustomFactor):
"""52-Week Trendline"""
inputs = [USEquityPricing.close]
window_length = timeperiod
def compute(self, today, assets, out, close_prices):
out[:] = [talib.LINEARREG_SLOPE(close,
timeperiod=timeperiod)[-1]
for close in close_prices.T]
return Trendline
MOMENTUM_FACTORS = {
'Percent Above Low' : MomentumFactors.PercentAboveLow,
'Percent Below High' : MomentumFactors.PercentBelowHigh,
'Price Oscillator' : MomentumFactors.make_oscillator(),
'Money Flow Index' : MomentumFactors.make_mfi(),
'Directional Movement Index' : MomentumFactors.make_dx(),
'Trendline' : MomentumFactors.make_trendline()
}
momentum_result, t = factor_pipeline(MOMENTUM_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
momentum_result.info()
###Output
Pipeline run time 21.78 secs
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 50362 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Data columns (total 6 columns):
Directional Movement Index 50362 non-null float64
Money Flow Index 50362 non-null float64
Percent Above Low 49536 non-null float64
Percent Below High 49536 non-null float64
Price Oscillator 50355 non-null float64
Trendline 49536 non-null float64
dtypes: float64(6)
memory usage: 2.7+ MB
###Markdown
Efficiency
###Code
class EfficiencyFactors:
@staticmethod
def CapexToAssets(mask):
"""Capital Expenditure divided by Total Assets"""
capex = AnnualizedData(inputs = [Fundamentals.capital_expenditure_asof_date,
Fundamentals.capital_expenditure],
mask=mask)
assets = Fundamentals.total_assets.latest
return - capex / assets
@staticmethod
def CapexToSales(mask):
"""Capital Expenditure divided by Total Revenue"""
capex = AnnualizedData(inputs = [Fundamentals.capital_expenditure_asof_date,
Fundamentals.capital_expenditure],
mask=mask)
revenue = AnnualizedData(inputs = [Fundamentals.total_revenue_asof_date,
Fundamentals.total_revenue],
mask=mask)
return - capex / revenue
@staticmethod
def CapexToFCF(mask):
"""Capital Expenditure divided by Free Cash Flows"""
capex = AnnualizedData(inputs = [Fundamentals.capital_expenditure_asof_date,
Fundamentals.capital_expenditure],
mask=mask)
free_cash_flow = AnnualizedData(inputs = [Fundamentals.free_cash_flow_asof_date,
Fundamentals.free_cash_flow],
mask=mask)
return - capex / free_cash_flow
@staticmethod
def EBITToAssets(mask):
"""Earnings Before Interest and Taxes (EBIT) divided by Total Assets"""
ebit = AnnualizedData(inputs = [Fundamentals.ebit_asof_date,
Fundamentals.ebit],
mask=mask)
assets = Fundamentals.total_assets.latest
return ebit / assets
@staticmethod
def CFOToAssets(mask):
"""Operating Cash Flows divided by Total Assets"""
cfo = AnnualizedData(inputs = [Fundamentals.operating_cash_flow_asof_date,
Fundamentals.operating_cash_flow],
mask=mask)
assets = Fundamentals.total_assets.latest
return cfo / assets
@staticmethod
def RetainedEarningsToAssets(mask):
"""Retained Earnings divided by Total Assets"""
retained_earnings = AnnualizedData(inputs = [Fundamentals.retained_earnings_asof_date,
Fundamentals.retained_earnings],
mask=mask)
assets = Fundamentals.total_assets.latest
return retained_earnings / assets
EFFICIENCY_FACTORS = {
'CFO To Assets' :EfficiencyFactors.CFOToAssets,
'Capex To Assets' :EfficiencyFactors.CapexToAssets,
'Capex To FCF' :EfficiencyFactors.CapexToFCF,
'Capex To Sales' :EfficiencyFactors.CapexToSales,
'EBIT To Assets' :EfficiencyFactors.EBITToAssets,
'Retained Earnings To Assets' :EfficiencyFactors.RetainedEarningsToAssets
}
efficiency_result, t = factor_pipeline(EFFICIENCY_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
efficiency_result.info()
###Output
Pipeline run time 37.93 secs
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 50362 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Data columns (total 6 columns):
CFO To Assets 50351 non-null float64
Capex To Assets 46997 non-null float64
Capex To FCF 45799 non-null float64
Capex To Sales 46997 non-null float64
EBIT To Assets 46635 non-null float64
Retained Earnings To Assets 50349 non-null float64
dtypes: float64(6)
memory usage: 2.7+ MB
###Markdown
Risk
###Code
class RiskFactors:
@staticmethod
def LogMarketCap(mask):
"""Log of Market Capitalization log(Close Price * Shares Outstanding)"""
return np.log(MarketCap(mask=mask))
class DownsideRisk(CustomFactor):
"""Mean returns divided by std of 1yr daily losses (Sortino Ratio)"""
inputs = [USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, close):
ret = pd.DataFrame(close).pct_change()
out[:] = ret.mean().div(ret.where(ret<0).std())
@staticmethod
def MarketBeta(**kwargs):
"""Slope of 1-yr regression of price returns against index returns"""
return SimpleBeta(target=symbols('SPY'), regression_length=252)
class DownsideBeta(CustomFactor):
"""Slope of 1yr regression of returns on negative index returns"""
inputs = [USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, close):
t = len(close)
assets = pd.DataFrame(close).pct_change()
start_date = (today - pd.DateOffset(years=1)).strftime('%Y-%m-%d')
spy = get_pricing('SPY',
start_date=start_date,
end_date=today.strftime('%Y-%m-%d')).reset_index(drop=True)
spy_neg_ret = (spy
.close_price
.iloc[-t:]
.pct_change()
.pipe(lambda x: x.where(x<0)))
out[:] = assets.apply(lambda x: x.cov(spy_neg_ret)).div(spy_neg_ret.var())
class Vol3M(CustomFactor):
"""3-month Volatility: Standard deviation of returns over 3 months"""
inputs = [USEquityPricing.close]
window_length = 63
def compute(self, today, assets, out, close):
out[:] = np.log1p(pd.DataFrame(close).pct_change()).std()
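# np.log1p converts the simple returns to log returns, i.e. log(1 + r),
# before taking the standard deviation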
RISK_FACTORS = {
'Log Market Cap' : RiskFactors.LogMarketCap,
'Downside Risk' : RiskFactors.DownsideRisk,
'Index Beta' : RiskFactors.MarketBeta,
# 'Downside Beta' : RiskFactors.DownsideBeta,
'Volatility 3M' : RiskFactors.Vol3M,
}
risk_result, t = factor_pipeline(RISK_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
risk_result.info()
###Output
Pipeline run time 48.59 secs
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 50362 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Data columns (total 4 columns):
Downside Risk 50362 non-null float64
Index Beta 50079 non-null float64
Log Market Cap 50362 non-null float64
Volatility 3M 50362 non-null float64
dtypes: float64(4)
memory usage: 1.9+ MB
###Markdown
Growth
###Code
def growth_pipeline():
revenue = AnnualizedData(inputs = [Fundamentals.total_revenue_asof_date,
Fundamentals.total_revenue],
mask=UNIVERSE)
eps = AnnualizedData(inputs = [Fundamentals.diluted_eps_earnings_reports_asof_date,
Fundamentals.diluted_eps_earnings_reports],
mask=UNIVERSE)
return Pipeline({'Sales': revenue,
'EPS': eps,
'Total Assets': Fundamentals.total_assets.latest,
'Net Debt': Fundamentals.net_debt.latest},
screen=UNIVERSE)
start_timer = time()
growth_result = run_pipeline(growth_pipeline(), start_date=START, end_date=END)
for col in growth_result.columns:
for month in [3, 12]:
new_col = col + ' Growth {}M'.format(month)
kwargs = {new_col: growth_result[col].pct_change(month*MONTH).groupby(level=1).rank()}
growth_result = growth_result.assign(**kwargs)
print('Pipeline run time {:.2f} secs'.format(time() - start_timer))
growth_result.info()
###Output
Pipeline run time 23.48 secs
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 50362 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Data columns (total 12 columns):
EPS 50215 non-null float64
Net Debt 47413 non-null float64
Sales 50351 non-null float64
Total Assets 50362 non-null float64
EPS Growth 3M 50152 non-null float64
EPS Growth 12M 49963 non-null float64
Net Debt Growth 3M 47350 non-null float64
Net Debt Growth 12M 47171 non-null float64
Sales Growth 3M 50288 non-null float64
Sales Growth 12M 50099 non-null float64
Total Assets Growth 3M 50299 non-null float64
Total Assets Growth 12M 50110 non-null float64
dtypes: float64(12)
memory usage: 5.0+ MB
###Markdown
Quality
###Code
class QualityFactors:
@staticmethod
def AssetTurnover(mask):
"""Sales divided by average of year beginning and year end assets"""
assets = AnnualAvg(inputs=[Fundamentals.total_assets],
mask=mask)
sales = AnnualizedData([Fundamentals.total_revenue_asof_date,
Fundamentals.total_revenue], mask=mask)
return sales / assets
@staticmethod
def CurrentRatio(mask):
"""Total current assets divided by total current liabilities"""
assets = Fundamentals.current_assets.latest
liabilities = Fundamentals.current_liabilities.latest
return assets / liabilities
@staticmethod
def AssetToEquityRatio(mask):
"""Total current assets divided by common equity"""
assets = Fundamentals.current_assets.latest
equity = Fundamentals.common_stock.latest
return assets / equity
@staticmethod
def InterestCoverage(mask):
"""EBIT divided by interest expense"""
ebit = AnnualizedData(inputs = [Fundamentals.ebit_asof_date,
Fundamentals.ebit], mask=mask)
interest_expense = AnnualizedData(inputs = [Fundamentals.interest_expense_asof_date,
Fundamentals.interest_expense], mask=mask)
return ebit / interest_expense
@staticmethod
def DebtToAssetRatio(mask):
"""Total Debts divided by Total Assets"""
debt = Fundamentals.total_debt.latest
assets = Fundamentals.total_assets.latest
return debt / assets
@staticmethod
def DebtToEquityRatio(mask):
"""Total Debts divided by Common Stock Equity"""
debt = Fundamentals.total_debt.latest
equity = Fundamentals.common_stock.latest
return debt / equity
@staticmethod
def WorkingCapitalToAssets(mask):
"""Current Assets less Current liabilities (Working Capital) divided by Assets"""
working_capital = Fundamentals.working_capital.latest
assets = Fundamentals.total_assets.latest
return working_capital / assets
@staticmethod
def WorkingCapitalToSales(mask):
"""Current Assets less Current liabilities (Working Capital), divided by Sales"""
working_capital = Fundamentals.working_capital.latest
sales = AnnualizedData([Fundamentals.total_revenue_asof_date,
Fundamentals.total_revenue], mask=mask)
return working_capital / sales
class MertonsDD(CustomFactor):
"""Merton's Distance to Default """
inputs = [Fundamentals.total_assets,
Fundamentals.total_liabilities,
libor.value,
USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, tot_assets, tot_liabilities, r, close):
mertons = []
for col_assets, col_liabilities, col_r, col_close in zip(tot_assets.T, tot_liabilities.T,
r.T, close.T):
vol_1y = np.nanstd(col_close)
numerator = np.log(
col_assets[-1] / col_liabilities[-1]) + ((252 * col_r[-1]) - ((vol_1y ** 2) / 2))
mertons.append(numerator / vol_1y)
out[:] = mertons
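# Note: this is a rough approximation of Merton's distance to default,
# DD = [ln(V/D) + (r - sigma^2/2) * T] / (sigma * sqrt(T)) with T = 1 year,
# using annualized libor as r and the std of raw closing prices (not returns)
# as the volatility proxy, so the output is best read as a relative ranking.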
QUALITY_FACTORS = {
'AssetToEquityRatio' : QualityFactors.AssetToEquityRatio,
'AssetTurnover' : QualityFactors.AssetTurnover,
'CurrentRatio' : QualityFactors.CurrentRatio,
'DebtToAssetRatio' : QualityFactors.DebtToAssetRatio,
'DebtToEquityRatio' : QualityFactors.DebtToEquityRatio,
'InterestCoverage' : QualityFactors.InterestCoverage,
'MertonsDD' : QualityFactors.MertonsDD,
'WorkingCapitalToAssets': QualityFactors.WorkingCapitalToAssets,
'WorkingCapitalToSales' : QualityFactors.WorkingCapitalToSales,
}
quality_result, t = factor_pipeline(QUALITY_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
quality_result.info()
###Output
Pipeline run time 36.81 secs
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 50362 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Data columns (total 9 columns):
AssetToEquityRatio 45176 non-null float64
AssetTurnover 50314 non-null float64
CurrentRatio 45680 non-null float64
DebtToAssetRatio 50080 non-null float64
DebtToEquityRatio 48492 non-null float64
InterestCoverage 35250 non-null float64
MertonsDD 50362 non-null float64
WorkingCapitalToAssets 45680 non-null float64
WorkingCapitalToSales 45669 non-null float64
dtypes: float64(9)
memory usage: 3.8+ MB
###Markdown
Payout
###Code
class PayoutFactors:
@staticmethod
def DividendPayoutRatio(mask):
"""Dividends Per Share divided by Earnings Per Share"""
dps = AnnualizedData(inputs = [Fundamentals.dividend_per_share_earnings_reports_asof_date,
Fundamentals.dividend_per_share_earnings_reports], mask=mask)
eps = AnnualizedData(inputs = [Fundamentals.basic_eps_earnings_reports_asof_date,
Fundamentals.basic_eps_earnings_reports], mask=mask)
return dps / eps
@staticmethod
def DividendGrowth(**kwargs):
"""Annualized percentage DPS change"""
return Fundamentals.dps_growth.latest
PAYOUT_FACTORS = {
'Dividend Payout Ratio': PayoutFactors.DividendPayoutRatio,
'Dividend Growth': PayoutFactors.DividendGrowth
}
payout_result, t = factor_pipeline(PAYOUT_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
payout_result.info()
###Output
Pipeline run time 23.15 secs
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 50362 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Data columns (total 2 columns):
Dividend Growth 40517 non-null float64
Dividend Payout Ratio 39947 non-null float64
dtypes: float64(2)
memory usage: 1.2+ MB
###Markdown
Profitability
###Code
class ProfitabilityFactors:
@staticmethod
def GrossProfitMargin(mask):
"""Gross Profit divided by Net Sales"""
gross_profit = AnnualizedData([Fundamentals.gross_profit_asof_date,
Fundamentals.gross_profit], mask=mask)
sales = AnnualizedData([Fundamentals.total_revenue_asof_date,
Fundamentals.total_revenue], mask=mask)
return gross_profit / sales
@staticmethod
def NetIncomeMargin(mask):
"""Net income divided by Net Sales"""
net_income = AnnualizedData([Fundamentals.net_income_income_statement_asof_date,
Fundamentals.net_income_income_statement], mask=mask)
sales = AnnualizedData([Fundamentals.total_revenue_asof_date,
Fundamentals.total_revenue], mask=mask)
return net_income / sales
PROFITABILITY_FACTORS = {
'Gross Profit Margin': ProfitabilityFactors.GrossProfitMargin,
'Net Income Margin': ProfitabilityFactors.NetIncomeMargin,
# wrap the .latest factors so every entry is callable like the other factor dicts
'Return on Equity': lambda **kwargs: Fundamentals.roe.latest,
'Return on Assets': lambda **kwargs: Fundamentals.roa.latest,
'Return on Invested Capital': lambda **kwargs: Fundamentals.roic.latest
}
profitability_result, t = factor_pipeline(PROFITABILITY_FACTORS)
print('Pipeline run time {:.2f} secs'.format(t))
profitability_result.info()
# profitability_pipeline().show_graph(format='png')
###Output
_____no_output_____
###Markdown
Build Dataset Get Returns
###Code
lookahead = [1, 5, 10, 20]
returns = run_pipeline(Pipeline({'Returns{}D'.format(i): Returns(inputs=[USEquityPricing.close],
window_length=i+1, mask=UNIVERSE) for i in lookahead},
screen=UNIVERSE),
start_date=START,
end_date=END)
return_cols = ['Returns{}D'.format(i) for i in lookahead]
returns.info()
data = pd.concat([returns,
value_result,
momentum_result,
quality_result,
payout_result,
growth_result,
efficiency_result,
risk_result], axis=1).sort_index()
data.index.names = ['date', 'asset']
data['stock'] = data.index.get_level_values('asset').map(lambda x: x.asset_name)
###Output
_____no_output_____
###Markdown
Remove columns and rows with less than 80% of data availability
###Code
rows_before, cols_before = data.shape
data = (data
.dropna(axis=1, thresh=int(len(data)*.8))
.dropna(thresh=int(len(data.columns) * .8)))
data = data.fillna(data.median())
rows_after, cols_after = data.shape
print('{:,d} rows and {:,d} columns dropped'.format(rows_before-rows_after, cols_before-cols_after))
data.sort_index(1).info()
g = sns.clustermap(data.drop(['stock'] + return_cols, axis=1).corr())
plt.gcf().set_size_inches((14,14));
###Output
_____no_output_____
###Markdown
Prepare Features
###Code
X = pd.get_dummies(data.drop(return_cols, axis=1))
X.info()
###Output
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 47377 entries, (2014-01-02 00:00:00+00:00, Equity(24 [AAPL])) to (2015-12-31 00:00:00+00:00, Equity(47208 [GPRO]))
Columns: 182 entries, DividendYield to stock_YELP INC
dtypes: float64(182)
memory usage: 66.1+ MB
###Markdown
Shifted Returns
###Code
y = data.loc[:, return_cols]
shifted_y = []
for col in y.columns:
t = int(re.search(r'\d+', col).group(0))
shifted_y.append(y.groupby(level='asset')['Returns{}D'.format(t)].shift(-t).to_frame(col))
y = pd.concat(shifted_y, axis=1)
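# shifting each ReturnsXD column back by X days turns the trailing return
# into the *forward* X-day return from that date, i.e. the prediction target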
y.info()
ax = sns.boxplot(data=y[return_cols])
ax.set_title('Return Distributions');
target = 'Returns10D'
outliers = .01
model_data = pd.concat([y[[target]], X], axis=1).dropna().reset_index('asset', drop=True)
model_data = model_data[model_data[target].between(*model_data[target].quantile([outliers, 1-outliers]).values)]
model_data[target] = np.log1p(model_data[target])
features = model_data.drop(target, axis=1).columns
dates = model_data.index.unique()
print(model_data.info())
model_data[target].describe()
idx = pd.IndexSlice
###Output
_____no_output_____
###Markdown
Logistic Regression: Classification
###Code
def time_series_split(d, nfolds=5, min_train=21):
"""Generate train/test dates for nfolds
with at least min_train train obs
"""
train_dates = d[:min_train].tolist()
n = int(len(d)/(nfolds + 1)) + 1  # use the argument d, not the global dates
test_folds = [d[i:i + n] for i in range(min_train, len(d), n)]
for test_dates in test_folds:
if len(train_dates) > min_train:
yield train_dates, test_dates
train_dates.extend(test_dates)
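# Quick illustration on a toy index (illustration only; the real split below
# uses the `dates` index): the train window always grows, the test fold moves forward.
for _trn, _tst in time_series_split(pd.Index(range(30)), nfolds=3, min_train=5):
    print('train: {} obs, test: {} obs'.format(len(_trn), len(_tst)))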
target = 'Returns10D'
label = (y[target] > 0).astype(int).to_frame(target)
model_data = pd.concat([label, X], axis=1).dropna().reset_index('asset', drop=True)
features = model_data.drop(target, axis=1).columns
dates = model_data.index.unique()
print(model_data.info())
nfolds = 250
Cs = np.logspace(-5, 5, 11)
scaler = StandardScaler()
logistic_results, logistic_coeffs = pd.DataFrame(), pd.DataFrame()
for C in Cs:
print(C)
coeffs = []
log_reg = LogisticRegression(C=C)
for i, (train_dates, test_dates) in enumerate(time_series_split(dates, nfolds=nfolds)):
X_train = model_data.loc[idx[train_dates], features]
y_train = model_data.loc[idx[train_dates], target]
log_reg.fit(X=scaler.fit_transform(X_train), y=y_train)
X_test = model_data.loc[idx[test_dates], features]
y_test = model_data.loc[idx[test_dates], target]
y_pred = log_reg.predict_proba(scaler.transform(X_test))[:, 1]
coeffs.append(log_reg.coef_.squeeze())
logistic_results = (logistic_results
.append(y_test
.to_frame('actuals')
.assign(predicted=y_pred, C=C)))
logistic_coeffs[C] = np.mean(coeffs, axis=0)
auc_by_C = logistic_results.groupby('C').apply(lambda x: roc_auc_score(y_true=x.actuals.astype(int),
y_score=x.predicted))
auc_by_C
base_auc = auc_by_C.iloc[-1]
best_auc = auc_by_C.max()
best_C = auc_by_C.idxmax()
fig, axes = plt.subplots(ncols=2, sharex=True)
auc_by_C.sort_index(ascending=False).plot(logx=True, title='Area under the Curve (AUC)', ax=axes[0])
axes[0].axhline(y=base_auc, c='black')
axes[0].axvline(x=best_C, c='darkgrey', ls='--')
axes[0].set_xlabel('Regularization')
axes[0].set_ylabel('AUC')
logistic_coeffs.T.sort_index(ascending=False).plot(legend=False, logx=True, title='Logistic Ridge Path', ax=axes[1])
axes[1].set_xlabel('Regularization')
axes[1].set_ylabel('Coefficients')
axes[1].axvline(x=best_C, c='darkgrey', ls='--')
fig.tight_layout();
###Output
_____no_output_____
###Markdown
Ordinal Logit
###Code
target = 'Returns10D'
label = (y[target] > 0).astype(int).to_frame(target)
model_data = pd.concat([label, X], axis=1).dropna().reset_index('asset', drop=True)
features = model_data.drop(target, axis=1).columns
dates = model_data.index.unique()
print(model_data.info())
###Output
_____no_output_____ |
source/examples/basics/gog/geom_step.ipynb | ###Markdown
geom_step()
###Code
from datetime import datetime
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/economics.csv', \
parse_dates=['date'])
df = df[df.date > datetime(2000, 1, 1)]
ggplot(df, aes('date', 'unemploy')) + scale_x_datetime() + geom_step()
###Output
_____no_output_____ |
old_and_other_codes/tobaco3482_text.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import os
import pathlib
data_root = pathlib.Path('/content/drive/MyDrive/tobaco_OCR/')
print(data_root)
for item in data_root.iterdir():
print(item)
def get_file_paths_and_labels(data_root):
text_paths = [str(path) for path in data_root.glob('*/*.txt')]
return text_paths
text_paths = get_file_paths_and_labels(data_root)
print(text_paths)
texts = []
for this_text in text_paths:
with open(this_text) as f:
lines = f.readlines()
lines = ' '.join(lines)
texts.append(lines)
print(len(texts))
df = pd.DataFrame(list(zip(text_paths, texts)),
columns =['Path', 'text'])
df.head()
import re
df['text'] = [re.sub(r'[^\w\s]','',s) for s in df['text']]
df['text'] = [s.replace('\n','') for s in df['text']]
df
## Tokenize, Lemmatize, stopwords removal
import spacy
import nltk
nlp = spacy.load("en", disable=['parser', 'tagger', 'ner'])
from nltk.corpus import stopwords
nltk.download('stopwords')
stops = stopwords.words("english")
def normalize(comment, lowercase, remove_stopwords):
if lowercase:
comment = comment.lower()
comment = nlp(comment)
lemmatized = list()
for word in comment:
lemma = word.lemma_.strip()
if lemma:
if not remove_stopwords or (remove_stopwords and lemma not in stops):
lemmatized.append(lemma)
return " ".join(lemmatized)
df['text'] = df['text'].apply(normalize, lowercase=True, remove_stopwords=True)
df
#Tokenize
import nltk
nltk.download('punkt')
nltk.download('wordnet')
df['text'] = [nltk.word_tokenize(s) for s in df['text']]
df
from gensim.models.fasttext import FastText
word_tokens = df['text']
# Defining values for parameters
embedding_size = 300
window_size = 5
min_word = 5
down_sampling = 1e-2
# %%time
fast_Text_model = FastText(word_tokens,
size=embedding_size,
window=window_size,
min_count=min_word,
sample=down_sampling,
workers = 4,
sg=1,
iter=100)
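# Note: the keyword names above (size=, iter=) are the gensim 3.x API;
# in gensim 4.x they were renamed to vector_size= and epochs=.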
!mkdir -p saved_model
fast_Text_model.save('saved_model/my_model')
!zip -r '/content/saved_model.zip' '/content/saved_model'
from google.colab import files
files.download('/content/saved_model.zip')
# from gensim.models import Word2Vec
# # Save fastText gensim model
# fast_Text_model.save("model/ft_model_yelp")
# # Load saved gensim fastText model
# fast_Text_model = Word2Vec.load("model/ft_model_yelp")
!pip install fasttext
import fasttext
import fasttext.util
ft = fasttext.load_model('cc.en.300.bin')
# Check word embedding for a perticular word
l = fast_Text_model.wv['chicken']
print(len(l))
print(l)
def vectorize_str(token_list):
    """Return one fastText vector per token in the list."""
    embedd = []
    for word in token_list:
        embedd.append(fast_Text_model.wv[word])
    return embedd

# store the embeddings in a separate column; 'text' keeps the token lists,
# which the counting cell below still relies on
df['vectors'] = [vectorize_str(tokens) for tokens in df['text']]
biglist = []
for this_list in df['text']:
for i in this_list:
biglist.append(i)
print(len(biglist))
print(len(set(biglist)))
###Output
_____no_output_____ |
Notebook/Exercice3.ipynb | ###Markdown
Exercise 3 Exercise 3.1
###Code
import os
import glob
def sequence_parser (fasta):
"""input: a Fasta file
output : tuple (descriptor,sequence) with descriptor being the first line of the fasta file
"""
with open (fasta,'r') as f:
lines = f.readlines()
descriptor =lines[0]
sequence = lines[1:]
sequence = [line.rstrip() for line in sequence]
sequence = ''.join(sequence)
return descriptor, sequence
descr_salmo,sequence_salmo =sequence_parser('data/salmonella_spi1_region.fna')
print(descr_salmo)
sequence_salmo[:1000]
###Output
>gi|821161554|gb|CP011428.1| Salmonella enterica subsp. enterica strain YU39, complete genome, subsequence 3000000 to 3200000
###Markdown
Exercise 3.2 First, we need to copy and paste the NEB file into a text editor and save it with the extension .fna
###Code
fasta_phage = 'data/lambda.fna'
descr_phage,sequence_phage = sequence_parser(fasta_phage)
sequence_phage[:1000]
def restriction_sites(seq, recog_seq):
"""Determines the positions of the first base of the enzyme's recognition site
in a given genome sequence """
recog_index = []
seq_length = len(seq)
recog_length = len(recog_seq)
for i in range(seq_length - recog_length + 1):  # +1 so a site ending at the last base is found
if recog_seq == seq[i:i+recog_length]:
recog_index.append(i)
return recog_index
?restriction_sites
HindIII = 'AAGCTT'
EcoRI = 'GAATTC'
KpnI = 'GGTACC'
HIII = restriction_sites(sequence_phage,HindIII)
EcI = restriction_sites(sequence_phage,EcoRI)
KpI = restriction_sites(sequence_phage,KpnI)
print(HIII)
print(EcI)
print(KpI)
sequence_phage[37583:37583+len(HindIII)] == HindIII
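# Generalizing the spot check above: every reported index should point
# at an exact copy of the recognition site.
assert all(sequence_phage[i:i + len(HindIII)] == HindIII for i in HIII)
assert all(sequence_phage[i:i + len(EcoRI)] == EcoRI for i in EcI)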
###Output
_____no_output_____
###Markdown
Exercise 4 Exercise 4.1
###Code
seq = 'ATGACTACGT'
block_size = 4
def seq_block(seq,block_size):
blocks = []
for i in range(0,len(seq),block_size):
block = seq[i:i+block_size]
if len(block) == block_size:
blocks.append(block)
return blocks
seq_block(seq,block_size)
def gc_block(seq,block_size):
gc = []
seq.upper()
for i in range(0,len(seq),block_size):
block = seq[i:i+block_size]
if len(block) == block_size:
gc.append((block.count('G')+block.count('C'))/block_size)
return tuple(gc)
gc_block(seq,block_size)
def gc_map(seq, block_size, gc_thresh):
#First make sure all the bases are uppercase
seq.upper()
bl=''
gc_content = gc_block(seq,block_size)
seq_blocks =seq_block(seq,block_size)
for i,_ in enumerate(gc_content):
if gc_content[i] < gc_thresh:
bl += seq_blocks[i].lower()
else:
bl += seq_blocks[i]
return bl
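# Note: gc_block() and seq_block() both drop a trailing partial block, so the
# returned map can be up to block_size - 1 bases shorter than the input sequence.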
gc_map(seq, block_size, 0.4)
salmo_map = gc_map(sequence_salmo,1000,0.45)
20000//60
length_salmo =len(sequence_salmo)
print(length_salmo)
salmo_ = sequence_salmo.split(' ',60)
#if os.path.isfile('salmo_map.txt'):
# raise RuntimeError('salmo_map.txt already exists.')
#with open('salmo_map.txt','w') as f:
# f.write(salmo_map)
###Output
200000
###Markdown
Exercise 4.2
###Code
#we work with codons so we might need codons conversion
import bootcamp_utils
seq = 'GGATGATGATGTAAAAC'
seq.find('TAA')
def all_start(seq):
i =seq.find('ATG')
starts = []
while i >= 0:
starts.append(i)
i = seq.find('ATG', i + 1)
return starts
def first_stop1(seq):
stop = []
i = 0
while i < len(seq) - 2 and seq[i:i+3] not in ('TAA', 'TAG', 'TGA'):
i += 3
#we need to return the last base of stop codon
return i + 3
def first_stop(seq):
i = seq.find('TAA')
if i == -1:
i=1e10
j = seq.find('TAG')
if j == -1:
j=1e10
k = seq.find('TGA')
if k == -1:
k=1e10
return min(i,j,k)+3
print(first_stop(seq[6:]))
min(1,2,3)
def longest_ORF1(seq):
ORF = []
start = all_start(seq)
stop,temp = 0,0
for i,id_start in enumerate(start):
stop = first_stop(seq[id_start:])
print(stop, id_start)
while stop % 3 != 0 and temp<len(seq)-2:
temp = stop
stop = first_stop(seq[stop:])
ORF.append(seq[id_start:stop])
return ORF
#longest_ORF1(seq)
def all_stop(seq):
taa = []
tag = []
tga = []
#first start with taa
i = seq.find('TAA')
while i >= 0:
taa.append(i+3)
i = seq.find('TAA', i + 1)
#then tag
j = seq.find('TAG')
while j >= 0:
tag.append(j+3)
j = seq.find('TAG', j + 1)
#and tga
k = seq.find('TGA')
while k >= 0:
tga.append(k+3)
k = seq.find('TGA', k + 1)
return taa+tga+tag
a= all_start(seq)
a
b = all_stop(seq)
b
def find_ORF(seq):
ORF = []
start = all_start(seq)
stop = sorted(all_stop(seq))
for i in start:
for j in stop :
if j > i and (j - i) % 3 == 0:  # the stop must lie downstream of the start
ORF.append(seq[i:j])
break
return tuple(ORF)
find_ORF(seq)
def longestORF(seq):
temp =[""]
ORF = find_ORF(seq)
print(ORF)
for i in ORF :
if len(i) > len(temp[0]):
temp[0] = i
return temp
#long_ORF_salmo = longestORF(sequence_salmo) #I can't do it, my code is not optimized at all because I get all the
a = all_start(sequence_salmo)
#print(a)
b = all_stop(sequence_salmo)
#print(b)
long_ORF_salmo = longestORF(sequence_salmo)
long_ORF_salmo
longestORF(seq)
def DNA_prot(seq):
#we will assume that the protein is the transduction/translation of the longest ORF
#ORF = longestORF(seq)
prot_seq = ""
for i in range(0,len(seq)-3,3):
prot_seq += bootcamp_utils.codons[seq[i:i+3]]
return prot_seq
prot_seq = DNA_prot('ATGATGATGGAATAA')
prot_seq2 = DNA_prot('ATGAGGTTCTTATCTTCAGGGGGAGGC')
prot_seq2
###Output
_____no_output_____ |
bert_base_verbose.ipynb | ###Markdown
Tokenizing
###Code
%%time
tokenizer = BertTokenizer.from_pretrained(BERT_MODEL_PATH, cache_dir=None,do_lower_case=True)
#train_df = pd.read_csv(os.path.join(Data_dir,"train.csv"))#.sample(num_to_load+valid_size,random_state=SEED)
print('loaded %d records' % len(train_df))
# Make sure all comment_text values are strings
train_df['text'] = train_df['text'].astype(str)
x_train = convert_lines(train_df["text"].fillna("DUMMY_VALUE"),MAX_SEQUENCE_LENGTH,tokenizer)
print("X_train : {}".format(len(x_train)))
#test_df = pd.read_csv(os.path.join(Data_dir,"test.csv"))#.sample(num_to_load+valid_size,random_state=SEED)
print('loaded %d records' % len(test_df))
test_df['text'] = test_df['text'].astype(str)
x_test = convert_lines(test_df["text"].fillna("DUMMY_VALUE"),MAX_SEQUENCE_LENGTH,tokenizer)
print("X_test : {}".format(len(x_test)))
train_df=train_df.fillna(0)
# above not working in linux ?? these x_train & x_test are obtained from windows
#x_train = np.loadtxt('../job_nlp/x_train.csv', delimiter=',')
#x_test = np.loadtxt('../job_nlp/x_test.csv', delimiter=',')
seed_everything(SEED)
output_model_file = "bert_pytorch.bin"
lr=2e-5
batch_size = 8
accumulation_steps=2
n_labels = 2
criterion = nn.CrossEntropyLoss()
TARGET = 'label'
train_df[TARGET] = train_df[TARGET]-1
#x_train = train_df['text']
y_train = torch.tensor(train_df[TARGET])#.long()
y_train
y_train[:5]
def to_numpy(x):
return x.cpu().detach().numpy()
test_dataset = TensorDataset(torch.tensor(x_test, dtype = torch.long)) #TensorDataset(X_valid, valid_length, torch.tensor(Y_valid))
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
model = BertForSequenceClassification.from_pretrained("../job_nlp/working",cache_dir=None, num_labels=5)
%%time
best_epoch_list = []
best_val_acc_list = []
start_time = time()
epoch_df = pd.DataFrame()
splits = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED).split(x_train, y_train))
for fold in [0, 1, 2, 3, 4]:
print("{}/5 fold training starts!".format(fold+1))
fold_num = str(fold + 1)
trn_index, val_index = splits[fold]
#print("A")
X_train, X_valid = x_train[trn_index], x_train[val_index]
#train_length, valid_length = lengths[trn_index], lengths[val_index]
Y_train, Y_valid = y_train[trn_index], y_train[val_index]
train_dataset = TensorDataset(torch.tensor(X_train, dtype = torch.long), torch.tensor(Y_train, dtype=torch.long)) #TensorDataset(X_train, train_length, torch.tensor(Y_train))
valid_dataset = TensorDataset(torch.tensor(X_valid, dtype = torch.long), torch.tensor(Y_valid, dtype=torch.long)) #TensorDataset(X_valid, valid_length, torch.tensor(Y_valid))
#print("AA")
model = BertForSequenceClassification.from_pretrained("../job_nlp/working",cache_dir=None, num_labels=5)
model.zero_grad()
model = model.to(device)
#optimizer = BertAdam(optimizer_grouped_parameters,
# lr=lr,
# warmup=0.05,
# t_total=num_train_optimization_steps)
#scheduler = StepLR(optimizer, step_size=5, gamma=0.5)
#print("AAA")
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
#train = train_dataset
num_train_optimization_steps = int(EPOCHS*len(train_dataset)/batch_size/accumulation_steps)
#optimizer = BertAdam(optimizer_grouped_parameters,
# lr=lr,
# warmup=0.05,
# t_total=np.ceil(num_train_optimization_steps))
optimizer = AdamW(model.parameters(), lr, weight_decay=0.000025)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
valid_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False)
best_valid_score = 0
best_val_acc = 0
epoch_list = []
val_acc_list = []
# per-epoch results for this fold are collected into epoch_df after the
# epoch loop, once epoch_list and val_acc_list have actually been filled
#tq = tqdm_notebook(range(EPOCHS))
#model, optimizer = amp.initialize(model, optimizer, opt_level="O1",verbosity=0)
for epoch in range(1, EPOCHS + 1):
# print("AAAA")
#start_time = time.time()
train_loss = 0
train_total_correct = 0
model.train()
optimizer.zero_grad()
#tk0 = tqdm_notebook(enumerate(train_loader),total=len(train_loader),leave=False)
for i, (x_batch, y_batch) in enumerate(train_loader):
# print("AAAAA")
preds = model(x_batch.to(device), attention_mask = (x_batch>0).to(device), labels=None)
loss = criterion(preds, y_batch.to(device))
loss.backward()
if (i+1) % accumulation_steps == 0: # Wait for several backward steps
    optimizer.step() # Now we can do an optimizer step
    optimizer.zero_grad()
train_loss += loss.item()/len(train_loader)
#print("AAAAAA")
# Validation Starts
model.eval()
val_loss = 0
valid_total_correct = 0
#valid_preds = np.zeros(len(valid_dataset),5)
#valid_targets = np.zeros(len(valid_dataset),5)
with torch.no_grad():
for i, (x_batch, y_batch) in enumerate(valid_loader):
#valid_targets[i*batch_size: (i+1)*batch_size] = y_batch.numpy().copy()
# print("AAAAAAAA")
preds = model(x_batch.to(device), attention_mask = (x_batch>0).to(device), labels=None)
loss = criterion(preds, y_batch.to(device))
output_prob = F.softmax(preds, dim=1)
predict_vector = np.argmax(to_numpy(output_prob), axis=1)
label_vector = to_numpy(y_batch)
#valid_preds[i*batch_size: (i+1)*batch_size] = np.argmax(preds_prob.detach().cpu().squeeze().numpy())
bool_vector = predict_vector == label_vector
val_loss += loss.item()/len(valid_loader)
valid_total_correct += bool_vector.sum()
#val_score = roc_auc_score(valid_targets, valid_preds)
elapsed = time() - start_time
val_acc = valid_total_correct / len(valid_loader.dataset)
if val_acc > best_val_acc:
    best_val_acc = val_acc
    best_epoch = epoch
    print("val_acc has improved !! ")
#torch.save(model.state_dict(), '../job_nlp/Bert_20e_fold_{}.pt'.format(fold))
#print("================ ༼ つ ◕_◕ ༽つ BEST epoch : {}, Accuracy : {} ".format(epoch, best_val_acc))
#lr = [_['lr'] for _ in optimizer.param_g] # or optimizer
print("================ ༼ つ ◕_◕ ༽つ Epoch {} - train_loss: {:.5f} val_loss: {:.5f} val_acc: {:.5f} elapsed: {:.0f}m {:.0f}s".format(epoch, train_loss, val_loss, best_val_acc, elapsed // 60, elapsed % 60))
epoch_list.append(epoch)
val_acc_list.append(val_acc)
print("============== ༼ つ ◕_◕ ༽つ BEST epoch : {}, Accuracy : {} ====================================".format(epoch, best_val_acc))
#best_epoch_list.append(best_epoch)
#best_val_acc_list.append(best_val_acc)
#---- Inference ----
#batch_size = 8
print("========================== ༼ つ ◕_◕ ༽つ Model Load {}_th FOLD =================================".format(fold))
#model.load_state_dict(torch.load('Bert_20e_fold_{}.pt'.format(fold)))
model.eval()
predictions = np.zeros((len(test_loader.dataset),5))
with torch.no_grad():
for i, (x_batch, ) in enumerate(test_loader):
preds = model(x_batch.to(device), attention_mask = (x_batch>0).to(device), labels=None)
predictions[i*batch_size: (i+1)*batch_size] = to_numpy(preds)
print("predict values check : ",predictions[0])
#np.savetxt("../job_nlp/bert_raw_submission/bert_20e_fold_{}.csv".format(fold), predictions, delimiter=",")
###Output
1/5 fold training starts!
/home/yilgukseo/anaconda3/envs/pytorch/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
val_acc has improved !!
================ ༼ つ ◕_◕ ༽つ Epoch 1 - train_loss: 1.31766 val_loss: 1.06345 val_acc: 0.62428 elapsed: 1m 8s
|
DATA512 FINAL PROJECT .ipynb | ###Markdown
DATA 512 Final Project Does Seattle's School Choice System offer equal access to students regardless of race and socioeconomics? Alyssa Goodrich December 10, 2017 Abstract: With Donald Trump's appointment of Betsy DeVos, a long-time school choice advocate, as Secretary of Education, there is increased attention to the topic of school choice. Proponents of this system believe that students benefit from being able to choose the school that best meets their individual needs, which results in better achievement and attainment. Opponents of this system are concerned that it benefits only a certain group of students, while leaving out groups of marginalized students, resulting in increased segregation and worse outcomes for some students. This paper investigates Seattle's school choice system and addresses the following questions: - **1.** Do Seattle's option schools serve all student groups equally? - **2a.** Does the admission prioritization system benefit students in the geozone to the detriment of children outside the geozone? - **2b.** Does admissions prioritization based on geozone disproportionately benefit any group of students? - **3.** Do students who attend option schools perform better than their peers who attend attendance area schools (as measured by standardized testing conducted by the state)? We find that Seattle's school choice system disproportionately serves White students and Native American students, while under-serving black, Hispanic, Asian and low-income students. We also find that the admission system favors white students because white students disproportionately live in the geozone areas. Finally, we find that there are mixed results on whether option schools increase student achievement. For students overall, there is not a significant difference between achievement at option schools and attendance area schools. However, Hispanic students had higher rates of achievement at option schools, while black and white students had higher rates of achievement at attendance area schools. The disproportionate access of white students to option schools may warrant advocacy for a policy change in Seattle's school choice admission process. Achievement rates at choice and attendance area schools warrant more study. Introduction In the past year President Trump appointed Betsy DeVos as the new head of the Department of Education. Ms DeVos is a passionate proponent of school choice and is working to increase access to school choice throughout the country.$^{1}$ Traditionally, students enrolled in public school attend the school that is determined by their address (e.g., most students automatically attend the school that is in their town or neighborhood). Under a school choice system, students are given the option to apply their publicly-funded education benefit to a number of schools, including a range of public schools, charter schools, and in some cases, private or parochial schools. Proponents of this system believe that students benefit from being able to choose the school that best meets their individual needs, which results in better achievement and attainment$^{2}$. Opponents of this system are concerned that it benefits only a certain group of students, while leaving out groups of marginalized students, resulting in increased segregation and worse outcomes for some students.$^{3}$ For this project I am going to focus on the school choice system that exists in Seattle public elementary schools. 
Background on Seattle's School Choice System The Seattle Public Schools district offers families a choice of schools from among the public schools in the district. Unlike some school choice systems, Seattle students cannot apply their public education benefit to private or parochial schools. There are two types of schools in the Seattle Public School district: attendance area schools and option schools. There are 55 attendance area elementary and K-8 schools, and they are available first to students who live in the school's "attendance area" (all kids who live within the geographic region called an "attendance area" are guaranteed a spot in that school if they want it). If a school has additional space, then students from other attendance areas may also enter a lottery to attend. There are fifteen option schools that serve only students who opt in to attending. Access to option schools is determined by lottery. However, the enrollment process offers priority to some students. Spots in option schools are allotted first to students who have a sibling who attends the school, then to students who are in the geozone (a geographic area surrounding the school), and then to additional students by lottery number. In recent years demand for option schools has increased so much that many, and in some cases all, of the spots in option schools have gone to students who live in the geozone or have siblings who attend. Anecdotally, I have observed that many of the option schools are located in areas with high real estate prices. This means that the option schools may have effectively become an option only for students whose families can afford to live in these expensive neighborhoods, many of whom already attend high-performing schools. These observations are the impetus for some of my research questions. The locations of geozones for elementary schools are marked in blue hashes on the map below. The locations of geozones for high schools are marked in red hashes. For this analysis, I am focusing on elementary schools.
###Code
library("IRdisplay")
display_png(file='Project Data/GeozoneMap.png')
###Output
Warning message:
"package 'IRdisplay' was built under R version 3.4.3"
###Markdown
Research Questions My research questions will focus on helping me answer three broader questions. - **1.** Do Seattle's option schools serve all student groups equally? - **2a.** Does the admission prioritization system benefit students in the geozone to the detriment of children outside the geozone? - **2b.** Does admissions prioritization based on geozone disproportionately benefit any group of students? - **3.** Do students who attend option schools perform better than their peers who attend attendance area schools (as measured by standardized testing conducted by the state)? Analytical Methods I will use basic statistical methods to complete this research. The method used for each research question is outlined below: - **Question 1:** Do Seattle's option schools serve all student groups equally? **Method:** For each racial or economic group of students, I calculated the proportion of students who attend an option school (e.g., what percent of Asian students went to an option school). I also calculated the proportion of all students who did not belong to that group who went to an option school (e.g., what percent of non-Asian students went to an option school). I then used a two-tailed test to compare population proportions and recorded the proportions as well as the p-value comparing the proportions. **Human-centered considerations:** I chose this method because I believe it will be effective at determining whether different racial and socioeconomic groups are equally represented in the school choice system. A drawback of this data and my approach is that it relies on data generated by the state, where students were likely forced to check a box to identify their race. However, it is possible (maybe likely) that some students didn't have the option to check the box that they most identify with, for example the use of black vs African American and the lack of distinction between Latino and Hispanic. Thankfully, the State has considered their students and has not published data that could allow any student to be identified: if there are fewer than ten students in a racial group, the data about those students was suppressed. - **Question 2a:** Does the admission prioritization system benefit students in the geozone to the detriment of children outside the geozone? **Method:** Calculate the proportion of applicants who live in the geozone who are admitted, compare it to the proportion of applicants who live outside of the geozone who are admitted, and complete a two-tailed test to compare population proportions. **Human-centered considerations:** The biggest consideration for this method is the difficulty in providing transparency in the data. I determined which students lived in the geozone in a highly analog manner: I printed maps of the geozone areas and identified which attendance area schools had an attendance area that overlapped with the geozone. I then used the populations at those attendance area schools as my "in geozone" populations. In many cases the overlap was near perfect; in some cases, it was not. I have codified the outcome of this process, but it would be labor-intensive for another researcher to reproduce. - **Question 2b:** Does admissions prioritization based on geozone disproportionately benefit any group of students? 
**Method:** I calculated the proportion of each race that lived within the geozone area (e.g., what percent of all Asian students live in geozone eligible areas) and compared it to the proportion of students not in that race living in the geozone area (e.g., what proportion of all students who are not Asian live in geozone eligible areas). I conducted a two-tailed test to compare proportions, and if a racial group had a higher proportion geozone eligible than students not of that race, I concluded that race was overrepresented in geozone eligible areas. **Human-centered considerations:** I chose this test because I was seeking to reveal whether the geozone prioritization system contributed to the low rates of certain student groups at option schools. However, I realize that this metric affects not only whether students are likely to get in if they apply, but also whether they apply at all. - **Question 3:** Do students who attend option schools perform better than their peers who attend attendance area schools (as measured by the Smarter Balanced Assessment standardized test conducted by the state)? **Method:** I calculated the proportion of students who passed the Smarter Balanced Assessment in each racial/socioeconomic group at each school. I then used a t-test to compare the mean pass rates at option schools to the mean pass rates at attendance area schools. **Human-centered considerations:** When assessing whether students have equal access to option schools, I think it is also important to consider whether there is a benefit to attending an option school. Unfortunately, standardized test data was one of the few metrics I had available to make that assessment. Standardized test data is not ideal for several reasons: it ignores all factors other than academic achievement (such as social adjustment, social and emotional learning, supportive community, and many other factors). I was able to avoid privacy issues because the State removed information for any racial group that had fewer than ten students in a cohort (so it would be impossible to triangulate scores of individual students). Findings In this section, I conduct the analysis outlined in the methods section above, as well as report findings. First, import the necessary packages to enable the analysis, including dplyr, tidyr and reshape2 for data wrangling, and ggplot2 for creating graphs.
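(For reference: the two-sample tests of equal proportions used in what follows, via R's `prop.test`, are equivalent for two groups, up to a continuity correction, to a two-tailed z-test on the pooled statistic $$z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}, \qquad \hat{p} = \frac{x_1 + x_2}{n_1 + n_2}.$$)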
###Code
library("dplyr")
library("tidyr")
library("ggplot2")
library("reshape2")
###Output
_____no_output_____
###Markdown
Import and structure data First, we imported the data files used in the analysis.
###Code
#Read in data files
Achievement = read.csv('Project Data/2_03_AIM-EOC-MSP-SBA Assessments School.csv', fileEncoding = "UTF-8-BOM")
schools = read.csv('Project Data/Name&Type1.csv', fileEncoding = "UTF-8-BOM")
AcceptRate = read.csv("Project Data/SibGeozoneAcc.csv", fileEncoding = "UTF-8-BOM")
Demographics = read.csv("Project Data/1_2_Demographic Information by School.csv", fileEncoding = "UTF-8-BOM")
###Output
_____no_output_____
###Markdown
Next, we needed to prepare the data to enable our analyses. These steps included: - Filter out schools that are not in the Seattle Public School District, or are not elementary or K-8 schools. - Merge data sources to get all needed data in the same structure. - Eliminate unused data to make the data sets cleaner.
###Code
#filter out schools that are not in Seattle Public School District
SPSAchievement = filter(Achievement,District == "Seattle Public Schools")
#Create a list of schools that we can use to filter out our relevant schools from data sets that have
#additional schools
schoolList = schools[,1]
#Filter our SPSAchievement dataframe to include only SPS elementary schools
FilteredAch <- filter(SPSAchievement, SPSAchievement[,"School"] %in% schoolList)
#Remove data that isn't used in the analysis to make it simpler
SimpleAch <- select(FilteredAch, School, GradeLevel, StudentGroup, countMetStandardWithoutPP, countNotMet)
#merge in data on whether school is attendance area or option schools
SimpleAch = merge(SimpleAch, schools, by.x=c("School"), by.y=c("School"))
#merge school type data in to AcceptRate dataframe
AcceptRateType <- merge(AcceptRate, schools, by.x=c("School"), by.y=c("School"))
#Filter our Demographics dataframe to include only SPS elementary schools
FilteredDemo = filter(Demographics,District == "Seattle Public Schools")
FilteredDemo <- filter(FilteredDemo, FilteredDemo[,"School"] %in% schoolList)
#Remove data that isn't used in the analysis to make it simpler
FilteredDemo <- select(FilteredDemo, School, TotalEnrollment, NumberAmericanIndianorAlaskanNative, NumberAsian, NumberBlack ,NumberHispanic , NumberWhite,NumberFreeorReducedPricedMeals)
FilteredDemo <- merge(FilteredDemo, schools, by ="School")
FilteredDemo <- filter(FilteredDemo, FilteredDemo[,"Type"] != "Other")
FilteredDemo$Type = factor(FilteredDemo$Type)
write.csv(FilteredDemo, file = "DemographicInfo.csv")
write.csv(AcceptRateType, file = "AcceptRates.csv")
write.csv(SimpleAch, file = 'AchievementInfo.csv')
###Output
Warning message:
"package 'bindrcpp' was built under R version 3.4.3"
###Markdown
Question 1. Do Seattle's option schools serve all student groups equally? For each racial or economic group of students, I calculated the proportion of students who attend an option school (e.g., what percent of Asian students went to an option school). I also calculated the proportion of all students who did not belong to that group who went to an option school (e.g., what percent of non-Asian students went to an option school). I then used a two-tailed test to compare population proportions and recorded the proportions as well as the p-value comparing the proportions. To do this, I first aggregated the demographic data to create a table that shows how many students of each race attended option schools versus attendance area schools. See plot below.
###Code
# Aggregate enrollment data to show how many students from each racial group attend option schoolss vs attendance area schools
FilteredDemoAgg = aggregate(FilteredDemo[,2:8], by=list(Category=FilteredDemo[,9]), FUN=sum)
FilteredDemoAgg
###Output
_____no_output_____
###Markdown
I then created a function to compare the population proportions of students attending option schools versus attendance area schools, and looped over each group of students to compare its proportion at option schools with the proportion for students outside the group (e.g., what percent of Asian students attend an option school, and is that the same as the proportion of non-Asian students?). I captured the results in the table below.
###Code
# Create a function to compare the population proportion of each student group in Geozone Eligible areas, vs non student group in geozone elgible
ComparePopulations <- function(Group){
PropChoiceTable <- matrix(c(FilteredDemoAgg[2,Group],FilteredDemoAgg[1,Group],(FilteredDemoAgg[2,'TotalEnrollment']-FilteredDemoAgg[2,Group]),(FilteredDemoAgg[1,'TotalEnrollment']-FilteredDemoAgg[1,Group])),ncol=2,byrow=TRUE)
rownames(PropChoiceTable) <- c(Group,paste("not", Group))
colnames(PropChoiceTable) <- c("Choice","Attendance Area")
PropChoiceTable <- as.table(PropChoiceTable)
return(prop.test(PropChoiceTable))
}
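# Note: for a 2x2 table, prop.test() runs a chi-squared test of equal
# proportions (Yates-corrected by default), equivalent to the two-tailed
# two-sample proportion z-test described in the methods section.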
# Instantiate a table to capture the proportion of each racial group attending option schools and
# the P-value of whether that proportion is different than for other student groups collectively
ProportionAtChoice <- data.frame(matrix(ncol = 4, nrow = length(FilteredDemoAgg) -2))
names = c('StudentGroup', '% at Choice','% others at Choice','p-value')
colnames(ProportionAtChoice) = names
#Create Group list to loop through
Groups = c('NumberAmericanIndianorAlaskanNative', 'NumberAsian' ,'NumberBlack', 'NumberHispanic', 'NumberWhite', 'NumberFreeorReducedPricedMeals')
# Compare populations in Choice vs regular schools for each student group and save results into dataframe
for (i in (1:length(Groups))){
Group = Groups[i]
temp = ComparePopulations(Group)
ProportionAtChoice[i,'StudentGroup'] = unlist(strsplit(Group, split='Number', fixed=TRUE))[2]
ProportionAtChoice[i,'% at Choice'] = temp$estimate[1]
ProportionAtChoice[i,'% others at Choice'] = temp$estimate[2]
ProportionAtChoice[i,'p-value'] = temp$p.value
}
#Print results
ProportionAtChoice[ProportionAtChoice$StudentGroup == 'AmericanIndianorAlaskanNative',1] = 'Native Am./\nAlaska Native'
ProportionAtChoice[ProportionAtChoice$StudentGroup == 'FreeorReducedPricedMeals',1] = 'Low income'
ProportionAtChoice
###Output
_____no_output_____
###Markdown
These results show that Native American and white students are over-represented at option schools, while Asian, Black, Hispanic and low-income students are under-represented. We can see that the p-values for the comparisons of proportions are significant in all cases. The chart below helps to visualize the difference between each student group's proportion at option schools and the overall percent of students in an option school. The horizontal line represents the overall percent of students in an option school, so we can see that white students and Native American students are above the line, while other groups are below. Now that we have concluded that different student groups are not equally represented, we will investigate whether the admissions policies might contribute to this issue.
###Code
#Calculate proportion of each group attending option schools
ProportionAttend = ProportionAtChoice[,2]
#Rename columns to make them less cumbersome
names = ProportionAtChoice[,1]
names(ProportionAttend)= names
#reorder from highest to lowest
ProportionAttend = ProportionAttend[order(-ProportionAttend)]
#print table
#Create chart
options(repr.plot.width=8, repr.plot.height=6)
a = barplot(ProportionAttend, main="Proportion of student group attending option schools",
xlab="Student Group", ylab = "Proportion of Student Group", ylim = c(0,.35), col=cols <- c( "darkblue"))
text(x = a, y= ProportionAttend,labels=round(ProportionAttend,2), pos = 3,cex=.8, xpd = NA)
abline(h = FilteredDemoAgg[2,2]/(FilteredDemoAgg[2,2]+FilteredDemoAgg[1,2]))
###Output
_____no_output_____
###Markdown
Question 2a. Does the admission prioritization system benefit students in the geozone to the detriment of children outside the geozone?Analytical methods: I will calculate the proportion of students admitted who had geozone preference and the proportion admitted who did not, and perform a two-tailed test to compare population proportions, comparing applicants who had preference vs those who didn't for each school. To do this, first I find the total number of students applied, waitlisted and accepted from geozone and non-geozone areas. I then restructure the counts into a table that enables the statistical test. Finally I perform the two-tailed test. I found that 88% of applicants from the geozone were accepted to option schools, whereas only 31% of applicants from non-geozone areas were accepted. The difference between these two proportions was significant, with a p-value approaching zero (2.2e-16). This information is shown in the chart below.
###Code
# Find the total number of students applied, waitlisted and accepted from geozone and not from geozone
AcceptRateType <- merge(AcceptRate, schools, by.x=c("School"), by.y=c("School"))
ChoiceAcceptRate <- filter(AcceptRateType, Type == "Choice")
AcceptTotals = colSums(ChoiceAcceptRate[,-8][,-1], na.rm = TRUE, dims = 1)
#Make a table of total accepted and waitlisted (i.e., denied) for geozone and non geozone students
PropAccTable <- matrix(c(AcceptTotals['Geozone.Accepted'],AcceptTotals['Geozone.Waitlist'],AcceptTotals['No.Geozone.Accepted'],AcceptTotals['No.geozone.Waitlist']),ncol=2,byrow=TRUE)
colnames(PropAccTable) <- c("Accepted","Denied")
rownames(PropAccTable) <- c("Geozone","Not Geozone")
PropAccTable <- as.table(PropAccTable)
#Perform a test for equality of proportions to compare proportion accepted for geozone students vs non geozone students
results = prop.test(PropAccTable)
results
###Output
_____no_output_____
###Markdown
The results are summarized in the chart below.
###Code
#print chart to view proportion of students accepted from geozone and non geozone areas
ChartData = c(results$estimate[1], results$estimate[2])
names(ChartData) = c("Yes", "No")
a = barplot(ChartData, main="Proportion Accepted from Geozone and Non Geozone areas",
xlab="Applicant geozone eligible", ylab = "Proportion of applicants accepted", ylim=c(0,1),col=cols <- c( "darkblue"))
text(x = a,y= ChartData,labels=round(ChartData,2), pos = 3,cex=.8, xpd = NA)
###Output
_____no_output_____
###Markdown
These results clearly (and not surprisingly) indicate that geozone priority does significantly increase the probability of acceptance compared to applicants who do not live in the geozone. Our next task is to determine whether this prioritization system results in systematic admissions preference for any particular racial or socioeconomic group of students. Question 2b. Does admissions prioritization based on geozone disproportionately benefit any group of students?Analytical method: I calculated the proportion of each race that lived within the geozone area (e.g., what percent of all Asian students live in geozone-eligible areas) and compared it to the proportion of students not in that race living in the geozone area (e.g., what proportion of all students who are not Asian live in geozone-eligible areas). I conducted a two-tailed test to compare proportions, and if a racial group had a higher proportion geozone eligible than students not of that race, I concluded that race was overrepresented in geozone-eligible areas. To estimate the proportion of students who were geozone eligible, I identified students who attended attendance area schools whose attendance area overlapped with a geozone. Those students were labeled as geozone eligible. Students who attended a school whose attendance area did not overlap with a geozone were labeled as not geozone eligible. To do this work, I first selected only attendance area schools and then aggregated them into geozone eligible and not geozone eligible, producing the following table.
###Code
#Filter Demographic dataset to the schools, rows and columns needed for this analysis (i.e. number of students at each school in each group)
PopProp = filter(FilteredDemo, Type !="Choice")
PopProp = select(PopProp, 1:10)
# Aggregate population data to get a sum of students eligible for geozone of all races, and not eligible for geozone
PopPropAgg = aggregate(PopProp[,2:8], by=list(Category=PopProp[,10]), FUN=sum)
PopPropAgg[1,1] = "Not Geozone Eligible"
PopPropAgg[2,1] = "Geozone Eligible"
PopPropAgg
###Output
_____no_output_____
###Markdown
I then created a function to compare the proportion of students from a given group (e.g., low-income) who were geozone eligible to the proportion of students not in that group who were geozone eligible, and to determine whether the difference in those proportions was statistically significant. I made that comparison for each group of students; the results, along with the p-value for a test of whether the proportions are equal, appear in the table below.
###Code
Groups = c('NumberAmericanIndianorAlaskanNative', 'NumberAsian' ,'NumberBlack', 'NumberHispanic', 'NumberWhite', 'NumberFreeorReducedPricedMeals')
# Create a function to compare the population proportion of each student group in Geozone Eligible areas, vs non-geozone eligible areas
CompareEligibility <- function(Group){
PropGeoZoneEligTable <- matrix(c(PopPropAgg[2,Group],PopPropAgg[1,Group],(PopPropAgg[2,'TotalEnrollment']-PopPropAgg[2,Group]),(PopPropAgg[1,'TotalEnrollment']-PopPropAgg[1,Group])),ncol=2,byrow=TRUE)
rownames(PropGeoZoneEligTable) <- c(Group,paste("not", Group))
colnames(PropGeoZoneEligTable) <- c("Geozone Eligible","Not Geozone Eligible")
PropGeoZoneEligTable <- as.table(PropGeoZoneEligTable)
PropGeoZoneEligTable
return(prop.test(PropGeoZoneEligTable))
}
# Instantiate a data frame to store the proportion of each group that is geozone eligible, and the p-value
# for the test of whether that proportion equals the proportion among students not in the group
PctName = c('PctNativeAmerican', 'PctAsian', 'PctBlack' ,'PctHispanic', 'PctWhite', 'PctLowIncome')
ProportionEligible <- data.frame(matrix(ncol = 4, nrow = length(PctName)))
names = c('StudentGroup', '% who are geozone eligible','% of other groups who are geozone eligible','p-value')
colnames(ProportionEligible) = names
#Create Group list to loop through
Groups = c('NumberAmericanIndianorAlaskanNative', 'NumberAsian' ,'NumberBlack', 'NumberHispanic', 'NumberWhite', 'NumberFreeorReducedPricedMeals')
# Compare populations in Choice vs regular schools for each student group and save results into dataframe
for (i in (1:length(Groups))){
Group = Groups[i]
# print(Group)
temp = CompareEligibility(Group)
ProportionEligible[i,'StudentGroup'] = unlist(strsplit(Group, split='Number', fixed=TRUE))[2]
ProportionEligible[i,'% who are geozone eligible'] = temp$estimate[1]
ProportionEligible[i,'% of other groups who are geozone eligible'] = temp$estimate[2]
ProportionEligible[i,'p-value'] = temp$p.value
}
#Print results
ProportionEligible[ProportionEligible$StudentGroup == 'AmericanIndianorAlaskanNative',1] = 'Native Am./\nAlska. Native'
ProportionEligible[ProportionEligible$StudentGroup == 'FreeorReducedPricedMeals',1] = 'Low income'
ProportionEligible
###Output
_____no_output_____
###Markdown
In this table we can see that the difference between the proportion of students from a group who were geozone eligible (e.g., percent of Asian students who were eligible) and the proportion of students from other groups who were geozone eligible (e.g., percent of non-Asian students who were eligible) was significant for Asian, Hispanic, white and low-income students. White students had a higher proportion of eligibility, whereas Asian, Hispanic and low-income students had a lower proportion of eligibility. Black students were eligible at almost exactly the same rate as non-Black students. Native American/Alaska Native students were eligible at a lower rate than other students, but that population is so small that the difference was not statistically significant. Below we plot the proportion of each student group who is geozone eligible. The black line shows the proportion of all students who are eligible.
###Code
#Calculate proportion of each group that is geozone eligible
ProportionElig = ProportionEligible[,2]
#Rename columns to make them less cumbersome
names = ProportionEligible[,1]
names(ProportionElig)= names
#reorder from highest to lowest
ProportionElig = ProportionElig[order(-ProportionElig)]
#print table
#Create chart
options(repr.plot.width=8, repr.plot.height=5)
a = barplot(ProportionElig, main="Proportion of student group eligible for geozone preference",
xlab="Student Group", ylab = "Proportion of Student Group", ylim =c(0,.4), col=cols <- c( "darkblue"))
text(x = a,y= ProportionElig,labels=round(ProportionElig,2), pos = 3,cex=.8, xpd = NA)
abline(h = PopPropAgg[2,2]/(PopPropAgg[2,2]+PopPropAgg[1,2]))
###Output
_____no_output_____
###Markdown
From this analysis we can see that the geozone system results in a higher proportion of white students benefiting from geozone eligibility and a lower proportion of Hispanic, low-income and Asian students, whereas there is no statistically significant difference in the proportion of Native American / Alaska Native or Black students receiving this benefit vs the overall student population. I conclude that the geozone system does result in increased access to option schools for white students above all other student groups measured. Question 3. Do students who attend option schools perform better than their peers who attend attendance area schools (as measured by standardized testing conducted by the state)? Analytical methods: To conduct this analysis, I first created a function that calculated the percent of students from a given racial group that passed (e.g., scored a 3 or 4) on the Smarter Balanced assessment at each school. The function then performed a t-test to compare the vector of pass rates at option schools to the vector of pass rates at attendance area schools. The t-test assessed whether the mean pass rate at the two groups of schools was equal. I then ran the function on each group of students and captured the mean pass rate at attendance area schools, the mean pass rate at option schools, and the p-value to assess whether the mean pass rates were different. I found mixed results. Across all students there is not a statistically significant difference in pass rates at option schools versus attendance area schools. However, there is a statistically significant difference for some student groups. White and Black students did worse at option schools than at attendance area schools, whereas Hispanic students did better at option schools. There is no significant difference for Asian and low-income students. There was insufficient data available to test for Native American / Alaska Native students.
###Code
#This function takes an input of a dataset and a student group and will perform a T test to
#compare the pass rate for that student group in option schoolss vs Attendance area schools
ComparePassRate <- function(data, studentGroup){
AllAch <- filter(data, data[,"StudentGroup"]==studentGroup)
#Calculate pass rate for each subject and each grade
AllAch <- mutate(AllAch,PctPass = countMetStandardWithoutPP/(countMetStandardWithoutPP+countNotMet))
#Create two vectors, one for option schools and the other for attendance area schools
Choice = AllAch$Type == "Choice"
PctPassChoice = AllAch[Choice,]$PctPass
PctPassAA = AllAch[!Choice,]$PctPass
return(t.test(PctPassAA, PctPassChoice))
}
# Instantiate a data frame to store the mean pass rates and the p-value for the test of
# whether the mean pass rate is equal at option schools vs attendance area schools
ProportionPass <- data.frame(matrix(ncol = 4, nrow = length(PctName)))
names = c('StudentGroup', '% Passed at AA','% passed at choice','p-value')
colnames(ProportionPass) = names
#Create Group list to loop through
#NOTE: Native American / Alaskan Natives are excluded from this group because there is insufficient achievement data for this population
# The district does not publish results for this group in schools where the population is too small in order to protect their privacy
Groups = c("Black / African American", "Hispanic / Latino of any race(s)", "White", "Low Income" , "Asian" , "All")
# Compare student results in option schools to results in attendance area schools
for (i in (1:length(Groups))){
Group = Groups[i]
temp = ComparePassRate(SimpleAch,Group)
ProportionPass[i,'StudentGroup'] = Group
ProportionPass[i,'% Passed at AA'] = temp$estimate[1]
ProportionPass[i,'% passed at choice'] = temp$estimate[2]
ProportionPass[i,'p-value'] = temp$p.value
}
ProportionPass$StudentGroup[ProportionPass$StudentGroup =="Black / African American"] = "Black"
ProportionPass$StudentGroup[ProportionPass$StudentGroup == "Hispanic / Latino of any race(s)"] = "Hispanic"
# , "Hispanic / Latino of any race(s)", "White", "Low Income" , "Asian" , "All")
ProportionPass
###Output
_____no_output_____
###Markdown
Below are the results plotted in a bar chart.
###Code
#Create Chart
data = melt(ProportionPass[,1:3], id.vars="StudentGroup")
names = c('StudentGroup', 'School_Type','value')
colnames(data) = names
ggplot(data = data, aes(x=factor(StudentGroup) ,y=value *100, fill = factor(School_Type))) +
geom_bar(aes(fill = factor(School_Type)), stat = "identity", position = "dodge", show.legend = TRUE) +
scale_fill_discrete(name = "School Type",
labels = c("AA School", "option schools")) +
xlab("Student Group") + ylab("% Students passed SBA test") + ggtitle("Percent of students passing SBA test at Choice and AA Schools")+ guides(fill=guide_legend(title="School Type"))
FilteredDemoAgg[2,2]/(FilteredDemoAgg[2,2]+FilteredDemoAgg[1,2])
###Output
_____no_output_____ |
notebooks/original-notebooks/Data Preparation(From-Original-Repo).ipynb | ###Markdown
```Copyright 2021 Twitter, Inc.SPDX-License-Identifier: Apache-2.0``` Collect analysis data from Wikidata. Save the output of the query run on https://query.wikidata.org/ as described in the paper with the name `dataset.json`
###Code
import sys
import json
from pathlib import Path
import pandas as pd
HOME_DIR = Path("../").expanduser()
sys.path.append(str(HOME_DIR / "src"))
data_dir = HOME_DIR / Path("./data/")
data_dir.exists()
with open(data_dir / "./dataset.json") as fp:
wikidata_data = json.load(fp)
len(wikidata_data["results"]["bindings"])
wikidata_data["results"]["bindings"][0]
wikidata_data["results"].keys()
wikidata_data["results"]["bindings"][0].keys()
wikidata_data["results"]["bindings"][0]["human"]["value"].rsplit("/", 1)
REQUIRED_COLS = [
"human",
"image",
"sex_or_gender",
"ethnic_group",
"occupation",
"loc_aid",
]
def parse_row(row):
data = {}
for c in REQUIRED_COLS:
value = row[c]["value"]
if row[c]["type"] == "uri":
value = value.rsplit("/", 1)[-1]
data[c] = value
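    # derive a local filename from the Wikidata id, keeping the remote file's extension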
url = row["url"]
extension = Path(url.rsplit("/", 1)[-1]).suffix
local_path = f"{data['human']}{extension}"
data["url"] = url
data["local_path"] = local_path
return data
parse_row(wikidata_data["results"]["bindings"][0])
df = pd.DataFrame([parse_row(row) for row in wikidata_data["results"]["bindings"]])
df.head()
###Output
_____no_output_____
###Markdown
Gather images for all rows in `df`. Put the required images for each wikidata id in `df` into the `OUTPUT_DIR` using the file name specified via the column `local_path`
###Code
OUTPUT_DIR = Path(data_dir / "./images/")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
df["file_exists"] = df["local_path"].apply(lambda x: (OUTPUT_DIR / x).exists())
df.file_exists.value_counts()
df.file_exists.value_counts()[False]
###Output
_____no_output_____
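###Markdown
The download step itself is left to the reader. A minimal sketch of one way to do it is below; it assumes the `requests` library and plain sequential downloads, which may need rate limiting against the image servers.
###Code
# hedged sketch: fetch each missing image into OUTPUT_DIR under its local_path
import requests

for _, r in df[~df["file_exists"]].iterrows():
    target = OUTPUT_DIR / r["local_path"]
    try:
        resp = requests.get(r["url"], timeout=30)
        resp.raise_for_status()
        target.write_bytes(resp.content)
    except Exception as e:
        print("failed %s: %s" % (r["url"], e))
###Output
_____no_output_____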
###Markdown
After putting all images in the folder, run the next cell to update the dataframe with the file status
###Code
df["file_exists"] = df["local_path"].apply(lambda x: (OUTPUT_DIR / x).exists())
df.file_exists.value_counts()[False]
len(list(OUTPUT_DIR.glob("./*")))
df.file_exists.value_counts()
df["ethnic_group"].value_counts()
df.to_csv(data_dir / "./dataset.tsv", sep="\t", index=False)
###Output
_____no_output_____ |
assignment3/assignment3_p2/yolo_loss_debug_tool.ipynb | ###Markdown
Yolo Loss Debug ToolWe provide a simple debug tool for you to debug each part of the YoloLoss class. This tool is just to help with debugging, and you don't need to submit any results from this notebook. We highly suggest that you first make sure your yolo loss function works properly before starting large-scale training. __NOTE__:This tool is designed to run on CPU. If you want to run on GPU, add ```.to(device)``` to every input and output tensor you load- if you get a device (cpu/gpu) mismatch error like "expected type torch.FloatTensor but got torch.cuda.FloatTensor", make sure all your variables have consistent device assignment inside functions.
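A minimal sketch of that adaptation, assuming the same test-case files used below:
###Code
# hedged sketch: load one test case and move its tensors to the active device
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
example = torch.load('test_cases/get_regression_input.pt')
box_pred_response = example['box_pred_response'].to(device)
box_target_response = example['box_target_response'].to(device)
###Output
_____no_output_____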
###Code
# some helper functions
import torch
def test_error(diff, test='', eps=1e-5):
if isinstance(diff, torch.Tensor):
diff = diff.cpu().detach().float()
print('Error is %f.' % diff)
if diff < eps:
print("- You pass the test for %s!" % test)
else:
print("- emm.. something wrong. maybe double check your implemention.")
###Output
_____no_output_____
###Markdown
Create A YoloLoss Instance
###Code
# don't change the hyperparameter here
yolo = YoloLoss(S=14, B=2, l_coord=5, l_noobj=0.5)
###Output
_____no_output_____
###Markdown
Test get_class_prediction_loss function- run the following block to check if your ```YoloLoss.get_class_prediction_loss()``` function is implemented correctly.- Note: this test doesn't cover edge cases, so passing it doesn't guarantee your code is bug-free. However, if you can't pass this test, then your implementation must have a bug.- Note: this test assumes your reduction method is 'sum'
###Code
# load test cases
func_name = 'get_class_prediction'
input_data = torch.load("test_cases/%s_input.pt" % func_name)
class_pred = input_data['classes_pred']
class_target = input_data['classes_target']
output_data = torch.load("test_cases/%s_output.pt" % func_name)
gt_loss = output_data['class_loss']
# calculate my implemented loss
my_loss = yolo.get_class_prediction_loss(class_pred, class_target)
# test the difference between my loss and the gt loss
loss_diff = torch.sum((gt_loss - my_loss) ** 2)
test_error(loss_diff, test=func_name)
###Output
Error is 0.000000.
- You pass the test for get_class_prediction!
###Markdown
Test get_regression_loss function- run the following block to check if your ```YoloLoss.get_regression_loss()``` function is implemented correctly.- Note: this test doesn't cover edge cases, so passing it doesn't guarantee your code is bug-free. However, if you can't pass this test, then your implementation must have a bug.- Note: this test assumes your reduction method is 'sum'
###Code
# load test cases
func_name = "get_regression"
input_data = torch.load("test_cases/%s_input.pt" % func_name)
box_pred_response = input_data['box_pred_response']
box_target_response = input_data['box_target_response']
output_data = torch.load("test_cases/%s_output.pt" % func_name)
gt_loss = output_data['reg_loss']
# calculate my implemented loss
# my_loss = yolo.get_regression_loss(box_pred_response.cuda(), box_target_response.cuda())
my_loss = yolo.get_regression_loss(box_pred_response, box_target_response)
# test the difference between my loss and the gt loss
loss_diff = torch.sum((gt_loss - my_loss) ** 2)
test_error(loss_diff, test=func_name)
###Output
Error is 0.000000.
- You pass the test for get_regression!
###Markdown
Test get_contain_conf_loss function- run the following block to check if your ```YoloLoss.get_contain_conf_loss()``` function is implemented correctly.- Note: this test doesn't cover edge cases, so passing it doesn't guarantee your code is bug-free. However, if you can't pass this test, then your implementation must have a bug.
###Code
func_name = "get_contain_conf"
input_data = torch.load("test_cases/%s_input.pt" % func_name)
box_pred_response = input_data['box_pred_response']
box_target_response_iou = input_data['box_target_response_iou']
output_data = torch.load("test_cases/%s_output.pt" % func_name)
gt_loss = output_data['contain_loss']
# calculate my implemented loss
my_loss = yolo.get_contain_conf_loss(box_pred_response, box_target_response_iou)
# test the difference between my loss and the gt loss
loss_diff = torch.sum((gt_loss - my_loss) ** 2)
test_error(loss_diff, test=func_name)
###Output
Error is 0.000000.
- You pass the test for get_contain_conf!
###Markdown
Test get_no_object_loss function- run the following block to check if your ```YoloLoss.get_no_object_loss()``` function is implemented correctly.- Note: this test doesn't cover edge cases, so passing it doesn't guarantee your code is bug-free. However, if you can't pass this test, then your implementation must have a bug.
###Code
# load test cases input
func_name = "no_object_loss"
input_data = torch.load("test_cases/%s_input.pt" % func_name)
target_tensor = input_data['target_tensor']
pred_tensor = input_data['pred_tensor']
no_object_mask = input_data['no_object_mask']
output_data = torch.load("test_cases/%s_output.pt" % func_name)
gt_loss = output_data['no_object_loss']
# calculate my implemented loss
my_loss = yolo.get_no_object_loss(target_tensor, pred_tensor, no_object_mask)
# test the difference between my loss and the gt loss
loss_diff = torch.sum((gt_loss - my_loss) ** 2)
test_error(loss_diff, test=func_name)
###Output
Error is 0.000000.
- You pass the test for no_object_loss!
###Markdown
Test find_best_iou_boxes function- run the following block to check if your ```YoloLoss.find_best_iou_boxes()``` function is implemented correctly.- Note: this test doesn't cover edge cases, so passing it doesn't guarantee your code is bug-free. However, if you can't pass this test, then your implementation must have a bug.
###Code
# load test cases input
func_name = "best_iou_boxes"
input_data = torch.load("test_cases/%s_input.pt" % func_name)
bounding_box_target = input_data['bounding_box_target']
bounding_box_pred = input_data['bounding_box_pred']
output_data = torch.load("test_cases/%s_output.pt" % func_name)
gt_box_target_iou = output_data['box_target_iou']
gt_contains_object_response_mask = output_data['contains_object_response_mask']
bounding_box_pred.requires_grad = True
# calculate my implemented loss
my_box_target_iou, my_contains_object_response_mask = yolo.find_best_iou_boxes(bounding_box_target, bounding_box_pred)
# test the error for the first output
iou_diff = torch.sum((gt_box_target_iou - my_box_target_iou) ** 2)
test_error(iou_diff, test="the first output of %s" % func_name)
# test the error for the second output
mask_diff = torch.sum((gt_contains_object_response_mask.long() - my_contains_object_response_mask.long()) ** 2)
test_error(mask_diff, test="the second output of %s" % func_name)
print(my_box_target_iou.requires_grad)
print(my_contains_object_response_mask.requires_grad)
###Output
Error is 0.000000.
- You pass the test for the first output of best_iou_boxes!
Error is 0.000000.
- You pass the test for the second output of best_iou_boxes!
True
False
###Markdown
Test YoloLoss function- run the following block to check if your ```YoloLoss.forward()``` function is implemented correctly.- Note: this test doesn't cover edge cases, so passing it doesn't guarantee your code is bug-free. However, if you can't pass this test, then your implementation must have a bug.
###Code
input_data = torch.load("test_cases/full_input.pt")
pred_tensor = input_data['pred_tensor']
target_tensor = input_data['target_tensor']
output_data = torch.load("test_cases/full_output.pt")
gt_loss = output_data['total_loss']
# calculate my implemented loss
my_loss = yolo(pred_tensor, target_tensor)
loss_diff = torch.sum((gt_loss - my_loss) ** 2)
# test the difference between my loss and the gt loss
test_error(loss_diff, test="yolo")
###Output
Error is 0.000000.
- You pass the test for yolo!
|
NLP/Twitter_GloVe.ipynb | ###Markdown
###Code
import json
import tensorflow as tf
import csv
import random
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import regularizers
embedding_dim = 100
max_length = 16
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size=160000
test_portion=.1
corpus = []
# Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader
# You can do that yourself with:
# iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv
# I then hosted it on my site to make it easier to use in this notebook
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \
-O /tmp/training_cleaned.csv
num_sentences = 0
with open("/tmp/training_cleaned.csv") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
list_item=[]
list_item.append(row[5])
this_label=row[0]
if this_label=='0':
list_item.append(0)
else:
list_item.append(1)
num_sentences = num_sentences + 1
corpus.append(list_item)
print(num_sentences)
print(len(corpus))
print(corpus[1])
# Expected Output:
# 1600000
# 1600000
# ["is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!", 0]
sentences=[]
labels=[]
random.shuffle(corpus)
for x in range(training_size):
sentences.append(corpus[x][0])
labels.append(corpus[x][1])
tokenizer = Tokenizer(oov_token=oov_tok)
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
vocab_size=len(word_index)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
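# hold out the first test_portion (10%) of the shuffled data as the test set; the rest is for training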
split = int(test_portion * training_size)
test_sequences = padded[0:split]
training_sequences = padded[split:training_size]
test_labels = labels[0:split]
training_labels = labels[split:training_size]
print(vocab_size)
print(word_index['i'])
# Expected Output
# 138858
# 1
# Note this is the 100 dimension version of GloVe from Stanford
# I unzipped and hosted it on my site to make this notebook easier
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \
-O /tmp/glove.6B.100d.txt
embeddings_index = {}
with open('/tmp/glove.6B.100d.txt') as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
embeddings_matrix = np.zeros((vocab_size+1, embedding_dim))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embeddings_matrix[i] = embedding_vector
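# optional sanity check: words absent from GloVe keep all-zero rows in
# embeddings_matrix, so it is useful to know the vocabulary coverage
covered = sum(1 for w in word_index if w in embeddings_index)
print("GloVe covers %d of %d vocabulary words" % (covered, vocab_size))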
print(len(embeddings_matrix))
# Expected Output
# 138859
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv1D(64, 5, activation='relu'),
tf.keras.layers.MaxPooling1D(pool_size=4),
tf.keras.layers.LSTM(64),
tf.keras.layers.Dense(1, activation='sigmoid')
])
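# note: the embedding layer is frozen (trainable=False), so training updates only the Conv1D, LSTM and Dense weights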
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 50
training_padded = np.array(training_sequences)
training_labels = np.array(training_labels)
testing_padded = np.array(test_sequences)
testing_labels = np.array(test_labels)
history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=2)
print("Training Complete")
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['accuracy']
val_acc=history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r')
plt.plot(epochs, val_acc, 'b')
plt.title('Training and validation accuracy')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["Accuracy", "Validation Accuracy"])
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r')
plt.plot(epochs, val_loss, 'b')
plt.title('Training and validation loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss", "Validation Loss"])
plt.figure()
# Expected Output
# A chart where the validation loss does not increase sharply!
###Output
_____no_output_____ |
Figure1_Power/.ipynb_checkpoints/fig_power-checkpoint.ipynb | ###Markdown
Figure: How large should effect sizes be in neuroimaging to have sufficient power? Specification of alternativeIn a brain map in an MNI template, with smoothness of 3 times the voxel size, there is one active region with voxelwise effect size D. The (spatial) size of the region is relatively small (<200 voxels). We want to know how large D should be in order to have 80% power to detect the region using voxelwise FWE thresholding based on Random Field Theory. Detecting the region means that the maximum in the activated area exceeds the significance threshold. Strategy1. Compute the voxelwise threshold for the specified smoothness and volume * _FweThres = 5.12_2. Define the alternative hypothesis, so that the omnibus power is 80% 3. How large should the maximum statistic in a (small) region be to exceed the voxelwise threshold with 0.8 power? * _muMax = 4.00_4. How does this voxel statistic translate to Cohen's D for a given sample size? * _See Figure_
###Code
%matplotlib inline
from __future__ import division
import os
import nibabel as nib
import numpy as np
from neuropower import peakdistribution
import scipy.integrate as integrate
import pandas as pd
import matplotlib.pyplot as plt
import palettable.colorbrewer as cb
if not 'FSLDIR' in os.environ.keys():
raise Exception('This notebook requires that FSL is installed and the FSLDIR environment variable is set')
###Output
_____no_output_____
###Markdown
1. What is the voxelwise threshold?
###Code
# From smoothness + mask to ReselCount
FWHM = 3
ReselSize = FWHM**3
MNI_mask = nib.load(os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain_mask.nii.gz')).get_data()
Volume = np.sum(MNI_mask)
ReselCount = Volume/ReselSize
print("ReselSize: "+str(ReselSize))
print("Volume: "+str(Volume))
print("ReselCount: "+str(ReselCount))
print("------------")
# From ReselCount to FWE treshold
FweThres_cmd = 'ptoz 0.05 -g %s' %ReselCount
FweThres = os.popen(FweThres_cmd).read()
print("FWE voxelwise GRF threshold: "+str(FweThres))
###Output
ReselSize: 27
Volume: 228483
ReselCount: 8462.33333333
------------
FWE voxelwise GRF threshold: 5.123062
###Markdown
2. Definition of alternative Detect 1 region. We define a 'success' as a situation in which the maximum in the active field exceeds the threshold.
###Code
Power = 0.8
###Output
_____no_output_____
###Markdown
3. How large must the statistic in a field be to exceed the threshold with power 0.80? We quantify this by computing the expected local maximum in the field (which is a null field elevated by value D). We use the distribution of local maxima of Cheng & Schwartzman to compute the power/effect size.
###Code
muRange = np.arange(1.8,5,0.01)
muSingle = []
for muMax in muRange:
# what is the power to detect a maximum
power = 1-integrate.quad(lambda x:peakdistribution.peakdens3D(x,1),-20,float(FweThres)-muMax)[0]
if power>Power:
muSingle.append(muMax)
break
print("The power is sufficient for one region if mu equals: "+str(muSingle[0]))
###Output
The power is sufficient for one region if mu equals: 4.0
###Markdown
4. From the required voxel statistic to Cohen's D for a given sample size
###Code
# Read in data
Data = pd.read_csv("../SampleSize/neurosynth_sampsizedata.txt",sep=" ",header=None,names=['year','n'])
Data['source']='Tal'
Data=Data[Data.year!=1997] #remove year with 1 entry
David = pd.read_csv("../SampleSize/david_sampsizedata.txt",sep=" ",header=None,names=['year','n'])
David['source']='David'
Data=Data.append(David)
# add detectable effect
Data['deltaSingle']=muSingle[0]/np.sqrt(Data['n'])
# add jitter for figure
stdev = 0.01*(max(Data.year)-min(Data.year))
Data['year_jitter'] = Data.year+np.random.randn(len(Data))*stdev
# Compute medians per year (for smoother)
Medians = pd.DataFrame({'year':
np.arange(start=np.min(Data.year),stop=np.max(Data.year)+1),
'TalMdSS':'nan',
'DavidMdSS':'nan',
'TalMdDSingle':'nan',
'DavidMdDSingle':'nan',
'MdSS':'nan',
'DSingle':'nan'
})
for yearInd in (range(len(Medians))):
# Compute medians for Tal's data
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year==Medians.year[yearInd])])
Medians.TalMdSS[yearInd] = np.median(Data.n[yearBoolTal])
Medians.TalMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolTal])
# Compute medians for David's data
yearBoolDavid = np.array([a and b for a,b in zip(Data.source=="David",Data.year==Medians.year[yearInd])])
Medians.DavidMdSS[yearInd] = np.median(Data.n[yearBoolDavid])
Medians.DavidMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolDavid])
# Compute medians for all data
yearBool = np.array(Data.year==Medians.year[yearInd])
Medians.MdSS[yearInd] = np.median(Data.n[yearBool])
Medians.DSingle[yearInd] = np.median(Data.deltaSingle[yearBool])
Medians[0:5]
# add logscale
Medians['MdSSLog'] = [np.log(x) for x in Medians.MdSS]
Medians['TalMdSSLog'] = [np.log(x) for x in Medians.TalMdSS]
Medians['DavidMdSSLog'] = [np.log(x) for x in Medians.DavidMdSS]
Data['nLog']= [np.log(x) for x in Data.n]
###Output
_____no_output_____
###Markdown
The figure per List (Tal or David)
###Code
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter[Data.source=="Tal"],Data['nLog'][Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Data.year_jitter[Data.source=="David"],Data['nLog'][Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.TalMdSSLog,color=twocol[1],lw=3,label="Neurosynth")
axs[0].plot(Medians.year,Medians.DavidMdSSLog,color=twocol[3],lw=3,label="David et al.")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,8])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("Median Sample Size")
axs[0].legend(loc="upper left",frameon=False)
#labels=[1,5,10,20,50,150,500,1000,3000]
labels=[1,4,16,64,256,1024,3000]
axs[0].set_yticks(np.log(labels))
axs[0].set_yticklabels(labels)
axs[1].plot(Data.year_jitter[Data.source=="Tal"],Data.deltaSingle[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter[Data.source=="David"],Data.deltaSingle[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[1].plot(Medians.year,Medians.TalMdDSingle,color=twocol[1],lw=3,label="Neurosynth")
axs[1].plot(Medians.year,Medians.DavidMdDSingle,color=twocol[3],lw=3,label="David et al.")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size with 80% power")
axs[1].legend(loc="upper right",frameon=False)
plt.savefig('Figure1.svg',dpi=600)
plt.show()
###Output
_____no_output_____
###Markdown
Print median sample size and power for Neurosynth data
###Code
Medians.loc[:, lambda df: ['year', 'TalMdSS', 'TalMdDSingle']]
###Output
_____no_output_____
###Markdown
Compute median of sample sizes over last 5 years, for use in correlation simulation notebook.
###Code
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year>2010)])
print('Median sample size (2011-2015):',np.median(Data.n[yearBoolTal]))
###Output
Median sample size (2011-2015): 32.5
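###Markdown
As a quick follow-up, the per-voxel effect size detectable with 80% power at this median sample size follows directly from the deltaSingle = muMax / sqrt(n) relation used above; a minimal sketch:
###Code
# muSingle[0] is the required expected maximum (4.0) computed earlier
print('Detectable effect size at n = 32.5: %.2f' % (muSingle[0] / np.sqrt(32.5)))
###Output
_____no_output_____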
|
SystemCode/micro_expression/DataPrepare.ipynb | ###Markdown
0 Data preparation
###Code
datasets_path = "D:/NUS/NUS_academic/MER_Project_Code/"
data_path = os.path.join(datasets_path, "Cropped")
label_path = os.path.join(datasets_path, "Tag")
HoF_feature_path = "D:/NUS/NUS_academic/MER_Project_Code/hof_feature_all.npy"
LBP_feature_path = "D:/NUS/NUS_academic/MER_Project_Code/lbp_feature_all.npy"
Label_path = "D:/NUS/NUS_academic/MER_Project_Code/Label.npy"
os.chdir(label_path)
orig_dir = os.getcwd()
print(orig_dir)
os.chdir(data_path)
orig_dir = os.getcwd()
print(orig_dir)
subject_list = os.listdir(orig_dir)
#print(subject_list)
count = 0
Max_Frames_num = 0
Min_Frames_num = 1000
file_name = []
for subject in subject_list: # [sub01 ... sub26]
subject_path_list = os.path.join(orig_dir, subject)
file_list = os.listdir(subject_path_list)
file_name.append(file_list)
#print(file_list)
count = count + len(file_list)
for file in file_list: # [EP...]
file_path_list = os.path.join(subject_path_list, file)
img_list = os.listdir(file_path_list)
if Max_Frames_num < len(img_list):
Max_Frames_num = len(img_list)
if Min_Frames_num > len(img_list):
Min_Frames_num = len(img_list)
print(count,Max_Frames_num,Min_Frames_num)
print(file_name)
###Output
255 141 24
[['EP02_01f', 'EP03_02', 'EP04_02', 'EP04_03', 'EP04_04', 'EP19_01', 'EP19_03f', 'EP19_05f', 'EP19_06f'], ['EP01_11f', 'EP02_04f', 'EP03_02f', 'EP06_01f', 'EP06_02f', 'EP08_04', 'EP09_01', 'EP09_06f', 'EP09_10', 'EP11_01', 'EP13_04', 'EP14_01', 'EP15_04'], ['EP01_2', 'EP07_03', 'EP07_04', 'EP08_1', 'EP09_03', 'EP18_06', 'EP19_08'], ['EP12_01f', 'EP12_02f', 'EP13_02f', 'EP13_06f', 'EP19_01f'], ['EP02_07', 'EP03_01', 'EP03_06', 'EP03_07', 'EP04_05', 'EP04_06', 'EP05_03', 'EP05_09', 'EP06_10', 'EP07_01', 'EP08_05', 'EP09_05f', 'EP12_03f', 'EP12_06', 'EP13_04f', 'EP13_06', 'EP16_03f', 'EP16_04f', 'EP19_03'], ['EP01_01', 'EP02_31', 'EP10_08', 'EP15_02', 'EP16_05'], ['EP01_01', 'EP03_04', 'EP06_01', 'EP06_02_01', 'EP06_02_02', 'EP08_02', 'EP15_01', 'EP18_01', 'EP18_03'], ['EP12_07f', 'EP12_08f', 'EP13_01f'], ['EP02_01f', 'EP05_02', 'EP05_03', 'EP05_05', 'EP06_01f', 'EP06_02f', 'EP09f', 'EP09_04', 'EP09_05', 'EP13_01', 'EP13_02', 'EP15_05', 'EP17_08', 'EP18_03'], ['EP06_01f', 'EP08_01f', 'EP10_01f', 'EP11_01', 'EP11_01f', 'EP12_02f', 'EP12_03f', 'EP12_04', 'EP13_01', 'EP13_02f', 'EP13_03f', 'EP16_02f', 'EP18_01f', 'EP19_04f'], ['EP02_06f', 'EP08_01f', 'EP12_03f', 'EP13_02f', 'EP13_03f', 'EP13_05f', 'EP15_01f', 'EP15_04f', 'EP18_03f', 'EP19_03f'], ['EP01_02', 'EP02_05', 'EP03_02', 'EP03_04', 'EP04_16', 'EP06_06', 'EP08_01', 'EP08_03', 'EP08_07', 'EP09_02', 'EP09_06', 'EP16_02'], ['EP01_01', 'EP01_02', 'EP02_02', 'EP02_03', 'EP03_01', 'EP08_01', 'EP09_10', 'EP12_01'], ['EP04_04f', 'EP09_03', 'EP09_04', 'EP09_06'], ['EP03_02', 'EP04_02', 'EP08_02'], ['EP01_05', 'EP01_08', 'EP01_09f', 'EP04_02f'], ['EP01_06', 'EP01_13', 'EP01_15', 'EP02_01', 'EP02_03', 'EP02_11', 'EP02_18f', 'EP03_02', 'EP03_09', 'EP05_02', 'EP05_03', 'EP05_03f', 'EP05_04', 'EP05_09', 'EP05_10', 'EP06_04', 'EP06_07', 'EP06_08', 'EP07_01', 'EP08_02', 'EP08_03', 'EP10_06', 'EP11_01', 'EP11_02', 'EP12_03', 'EP13_01', 'EP13_03', 'EP13_04', 'EP13_06', 'EP13_09', 'EP15_01', 'EP15_03', 'EP15_04', 'EP15_05', 'EP16_01f', 'EP18_07'], ['EP08_01', 'EP18_01', 'EP19_01'], ['EP01_01f', 'EP01_02f', 'EP02_01', 'EP06_01f', 'EP06_02f', 'EP08_02', 'EP11_01f', 'EP11_04f', 'EP13_01', 'EP15_03f', 'EP16_01', 'EP16_02', 'EP19_01', 'EP19_02', 'EP19_03', 'EP19_04'], ['EP01_03', 'EP03_02', 'EP06_03', 'EP07_04', 'EP10_02', 'EP12_01', 'EP13_02', 'EP15_03f', 'EP16_01', 'EP16_04', 'EP18_03'], ['EP01_07', 'EP05_02'], ['EP01_12', 'EP13_08'], ['EP02_01', 'EP03_14f', 'EP04_03f', 'EP05_24f', 'EP05_25f', 'EP07_01', 'EP12_02f', 'EP12_03', 'EP13_03', 'EP13_04', 'EP13_07f', 'EP17_01'], ['EP01_08', 'EP02_02f', 'EP07_01', 'EP07_04f', 'EP08_02', 'EP10_01f', 'EP10_02', 'EP10_03', 'EP12_01', 'EP18_03'], ['EP03_01', 'EP03_02', 'EP09_02', 'EP10_01', 'EP10_10', 'EP12_01', 'EP18_04f'], ['EP03_10', 'EP07_28', 'EP07_37', 'EP08_04', 'EP09_04', 'EP09_09', 'EP13_01', 'EP13_02', 'EP13_11', 'EP15_01', 'EP16_01', 'EP18_44', 'EP18_46', 'EP18_47', 'EP18_49', 'EP18_50', 'EP18_51']]
###Markdown
After deleting the two unlabeled samples (24 EP02_07 and 9 EP02_02f), the maximum number of frames is 141, the minimum is 24, and there are 255 samples in total.
###Code
def dedimension(array_name):
tem = str(array_name)
tem = tem.replace('[','')
tem = tem.replace(']','')
array_name = list(eval(tem))
return array_name
file_name = dedimension(file_name)
file_name
# 检查标签是否对应
os.chdir(label_path)
Tag_name = pd.read_csv('Tag.csv', usecols=[1]).values.tolist()
Tag_name = dedimension(Tag_name)
Tag_name
# Check whether the labels are consistent
i = 0
for item in file_name:
if item == Tag_name[i]:
i=i+1
print(i)
continue
print(i)
break
# Inconsistent; export to CSV for inspection
name = ['file_name']
file_name_pd=pd.DataFrame(columns=name,data=file_name)
file_name_pd.to_csv('file_name.csv')
###Output
_____no_output_____
###Markdown
Labels are now aligned 1 Extract LBP_TOP | HOF features 1.1 HOF features
###Code
def hof(flow, orientations=9, pixels_per_cell=(8, 8),
cells_per_block=(2, 2), normalise=False, motion_threshold=1.):
"""Extract Histogram of Optical Flow (HOF) for a given image.
Key difference between this and HOG is that flow is MxNx2 instead of MxN
Compute a Histogram of Optical Flow (HOF) by
1. (optional) global image normalisation
2. computing the dense optical flow
3. computing flow histograms
4. normalising across blocks
5. flattening into a feature vector
Parameters
----------
flow : (M, N, 2) ndarray
Input dense flow (x and y flow components).
orientations : int
Number of orientation bins.
pixels_per_cell : 2 tuple (int, int)
Size (in pixels) of a cell.
cells_per_block : 2 tuple (int,int)
Number of cells in each block.
normalise : bool, optional
Apply power law compression to normalise the image before
processing.
motion_threshold : threshold for no motion
Returns
-------
newarr : ndarray
hof for the image as a 1D (flattened) array.
hof_image : ndarray (if visualise=True)
A visualisation of the hof image.
References
----------
* http://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
* Dalal, N and Triggs, B, Histograms of Oriented Gradients for
Human Detection, IEEE Computer Society Conference on Computer
Vision and Pattern Recognition 2005 San Diego, CA, USA
"""
flow = np.atleast_2d(flow)
"""
-1-
The first stage applies an optional global image normalisation
equalisation that is designed to reduce the influence of illumination
effects. In practice we use gamma (power law) compression, either
computing the square root or the log of each colour channel.
Image texture strength is typically proportional to the local surface
illumination so this compression helps to reduce the effects of local
shadowing and illumination variations.
"""
if flow.ndim < 3:
raise ValueError("Requires dense flow in both directions")
if normalise:
flow = np.sqrt(flow)
"""
-2-
The second stage computes first order image gradients. These capture
contour, silhouette and some texture information, while providing
further resistance to illumination variations. The locally dominant
colour channel is used, which provides colour invariance to a large
extent. Variant methods may also include second order image derivatives,
which act as primitive bar detectors - a useful feature for capturing,
e.g. bar like structures in bicycles and limbs in humans.
"""
if flow.dtype.kind == 'u':
# convert uint image to float
# to avoid problems with subtracting unsigned numbers in np.diff()
flow = flow.astype('float')
gx = np.zeros(flow.shape[:2])
gy = np.zeros(flow.shape[:2])
# gx[:, :-1] = np.diff(flow[:,:,1], n=1, axis=1)
# gy[:-1, :] = np.diff(flow[:,:,0], n=1, axis=0)
gx = flow[:,:,1]
gy = flow[:,:,0]
"""
-3-
The third stage aims to produce an encoding that is sensitive to
local image content while remaining resistant to small changes in
pose or appearance. The adopted method pools gradient orientation
information locally in the same way as the SIFT [Lowe 2004]
feature. The image window is divided into small spatial regions,
called "cells". For each cell we accumulate a local 1-D histogram
of gradient or edge orientations over all the pixels in the
cell. This combined cell-level 1-D histogram forms the basic
"orientation histogram" representation. Each orientation histogram
divides the gradient angle range into a fixed number of
predetermined bins. The gradient magnitudes of the pixels in the
cell are used to vote into the orientation histogram.
"""
magnitude = np.sqrt(gx**2 + gy**2)
orientation = np.arctan2(gy, gx) * (180 / math.pi) % 180
sy, sx = flow.shape[:2]
cx, cy = pixels_per_cell
bx, by = cells_per_block
n_cellsx = int(np.floor(sx // cx)) # number of cells in x
n_cellsy = int(np.floor(sy // cy)) # number of cells in y
# compute orientations integral images
orientation_histogram = np.zeros((n_cellsy, n_cellsx, orientations))
# 2-D index expression: each axis starts at the center pixel of the first cell and steps by the cell size up to the last full cell
subsample = np.index_exp[int(cy / 2):cy * n_cellsy:cy, int(cx / 2):cx * n_cellsx:cx]
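# only the first (orientations - 1) bins receive motion directions in the loop
# below; the last bin is reserved for the no-motion magnitudes filled in afterwards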
for i in range(orientations-1):
#create new integral image for this orientation
# isolate orientations in this range
temp_ori = np.where(orientation < 180 / orientations * (i + 1),
orientation, -1)
temp_ori = np.where(orientation >= 180 / orientations * i,
temp_ori, -1)
# select magnitudes for those orientations
cond2 = (temp_ori > -1) * (magnitude > motion_threshold)
temp_mag = np.where(cond2, magnitude, 0)
#temp_filt = uniform_filter(temp_mag, size=(cy, cx))
temp_filt = ndimage.uniform_filter(temp_mag, size=(cy, cx))
orientation_histogram[:, :, i] = temp_filt[subsample]
''' Calculate the no-motion bin '''
temp_mag = np.where(magnitude <= motion_threshold, magnitude, 0)
temp_filt = ndimage.uniform_filter(temp_mag, size=(cy, cx))
#print("tem_filt shape = ",temp_filt.shape)
orientation_histogram[:, :, -1] = temp_filt[subsample]
#print("orientation_histogram shape = ",orientation_histogram.shape)
"""
The fourth stage computes normalisation, which takes local groups of
cells and contrast normalises their overall responses before passing
to next stage. Normalisation introduces better invariance to illumination,
shadowing, and edge contrast. It is performed by accumulating a measure
of local histogram "energy" over local groups of cells that we call
"blocks". The result is used to normalise each cell in the block.
Typically each individual cell is shared between several blocks, but
its normalisations are block dependent and thus different. The cell
thus appears several times in the final output vector with different
normalisations. This may seem redundant but it improves the performance.
We refer to the normalised block descriptors as Histogram of Oriented
Gradient (hog) descriptors.
"""
n_blocksx = (n_cellsx - bx) + 1
n_blocksy = (n_cellsy - by) + 1
normalised_blocks = np.zeros((n_blocksy, n_blocksx,
by, bx, orientations))
for x in range(n_blocksx):
for y in range(n_blocksy):
block = orientation_histogram[y:y+by, x:x+bx, :]
eps = 1e-5
normalised_blocks[y, x, :] = block / np.sqrt(block.sum()**2 + eps)
return normalised_blocks.ravel()
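# With frames resized to (128, 150) and 8x8-pixel cells, hof() yields 15 x 17 blocks
# x (2 x 2 cells) x 9 orientation bins = 9180 features per frame pair, matching the
# (n_frames - 1, 9180) shapes printed in the output below.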
#extract Histogram of Optical Flow features for a given consequetive frames. Replace frame by extracted HOG
#def extract_hof_feature(seq_location_list,seq_label):
def extract_hof_feature(seq_location):
feature_hof = []
label_list = []
filepath=seq_location
seq_folder = os.listdir(filepath)
#seq_folder_sorted=sorted(seq_folder, key=numericalsort)
seq_folder_sorted=seq_folder
hof_hist=[]
#iterate the frames inside a given sequence
for b in range(0,len(seq_folder_sorted)-1):
framepath = os.path.join(filepath, seq_folder_sorted[b])
#print("image being processed", framepath)
#read first frame
frame = cv2.imread(framepath)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray,dsize=(128,150))
previousGray = gray
framepath_next = os.path.join(filepath, seq_folder_sorted[b+1])
frame_next = cv2.imread(framepath_next)
gray = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray,dsize=(128,150))
flow = cv2.calcOpticalFlowFarneback(previousGray, gray,
flow=None, pyr_scale=0.5, levels=5, winsize=11,
iterations=10, poly_n=5, poly_sigma=1.1, flags=0)
"""
pyr_scale: scale between successive pyramid levels; 0.5 means each level is a 2x downsampling of the previous one
levels: number of image pyramid levels
winsize: averaging window size; larger values make the algorithm more robust to noise and fast motion, but blur motion boundaries
iterations: number of iterations at each pyramid level
poly_n: size of the pixel neighborhood used for the per-pixel polynomial expansion; larger poly_n gives smoother, more robust (but more blurred) estimates; typically poly_n = 5 or 7
poly_sigma: standard deviation of the Gaussian used for the expansion; poly_sigma = 1.1 for poly_n = 5, poly_sigma = 1.5 for poly_n = 7
flow: computed flow image; same size as prev and of type CV_32FC2
"""
hof_feature_one = hof(flow, orientations=9, pixels_per_cell=(8, 8),cells_per_block=(2, 2))
hof_hist.append(hof_feature_one)
print(np.array(hof_hist).shape)
hof_hist_mean = np.mean(hof_hist,axis=0)
print(np.array(hof_hist_mean).shape)
return(hof_hist)
os.chdir(data_path)
orig_dir = os.getcwd()
print(orig_dir)
# Images need to be resized; here they are resized to (128, 150)
hof_feature = extract_hof_feature("D:/NUS/NUS_academic/Project_Code/Cropped/sub01/EP04_04")
# D:/NUS/NUS_academic/Project_Code/Cropped/sub01/EP02_01f (40,8568)
# D:/NUS/NUS_academic/Project_Code/Cropped/sub01/EP02_01f (43,74)
# Batch processing: extract HOF features for every sequence
subject_list = os.listdir(orig_dir) # [sub01 ... sub26]
hof_feature_all = []
for subject in subject_list:
subject_path_list = os.path.join(orig_dir, subject)
file_list = os.listdir(subject_path_list) #[EP...]
for file in file_list:
file_path_list = os.path.join(subject_path_list, file)
hof_feature = extract_hof_feature(file_path_list)
print(file_path_list+": extract over")
hof_feature_all.append(hof_feature)
hof_feature_all = np.array(hof_feature_all)
print("all_shape = ",hof_feature_all.shape)
np.save(HoF_feature_path,hof_feature_all)
###Output
(40, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP02_01f: extract over
(30, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP03_02: extract over
(55, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP04_02: extract over
(25, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP04_03: extract over
(43, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP04_04: extract over
(40, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP19_01: extract over
(80, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP19_03f: extract over
(50, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP19_05f: extract over
(125, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP19_06f: extract over
(50, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP01_11f: extract over
(110, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP02_04f: extract over
(120, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP03_02f: extract over
(99, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP06_01f: extract over
(75, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP06_02f: extract over
(90, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP08_04: extract over
(99, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP09_01: extract over
(140, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP09_06f: extract over
(40, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP09_10: extract over
(90, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP11_01: extract over
(55, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP13_04: extract over
(45, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP14_01: extract over
(70, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub02\EP15_04: extract over
(40, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP01_2: extract over
(60, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP07_03: extract over
(85, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP07_04: extract over
(80, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP08_1: extract over
(60, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP09_03: extract over
(100, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP18_06: extract over
(40, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub03\EP19_08: extract over
(60, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub04\EP12_01f: extract over
(50, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub04\EP12_02f: extract over
(60, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub04\EP13_02f: extract over
(90, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub04\EP13_06f: extract over
(35, 9180)
(9180,)
D:\NUS\NUS_academic\Project_Code\Cropped\sub04\EP19_01f: extract over
(65, 9180)
(9180,)
[... the same three-line pattern (sequence path: extract over / per-sequence feature matrix shape / pooled 9180-D vector shape) repeats for every remaining sequence, sub05 through sub26 ...]
D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_51: extract over
all_shape = (255,)
###Markdown
1.2 LBP-TOP Features
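LBP-TOP (Local Binary Patterns from Three Orthogonal Planes) extends plain LBP from single images to image sequences: each pixel is encoded not only on the spatial X-Y plane but also on the two spatio-temporal X-T and Y-T planes, and the three 256-bin code histograms are concatenated into one 768-D descriptor per frame. The cell below computes this descriptor for every frame of every cropped sequence and then averages it over the sequence; a small worked example of the basic LBP code is included after the first encoder.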
###Code
def get_pixel(img, center, x, y):
    # Return 1 if the neighbour at (x, y) is >= the centre pixel, else 0.
    # Out-of-range neighbours (image borders) count as 0; negative indices
    # are rejected explicitly because numpy would silently wrap them around.
    new_value = 0
    try:
        if x >= 0 and y >= 0 and img[x][y] >= center:
            new_value = 1
    except IndexError:
        pass
    return new_value
def lbp_calculated_pixel_x_y(img, x, y):
'''
64 | 128 | 1
----------------
32 | 0 | 2
----------------
16 | 8 | 4
'''
center = img[x][y]
val_ar = []
val_ar.append(get_pixel(img, center, x-1, y+1)) # top_right
val_ar.append(get_pixel(img, center, x, y+1)) # right
val_ar.append(get_pixel(img, center, x+1, y+1)) # bottom_right
val_ar.append(get_pixel(img, center, x+1, y)) # bottom
val_ar.append(get_pixel(img, center, x+1, y-1)) # bottom_left
val_ar.append(get_pixel(img, center, x, y-1)) # left
val_ar.append(get_pixel(img, center, x-1, y-1)) # top_left
val_ar.append(get_pixel(img, center, x-1, y)) # top
power_val = [1, 2, 4, 8, 16, 32, 64, 128]
val = 0
for i in range(len(val_ar)):
val += val_ar[i] * power_val[i]
return val
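# A tiny worked example (values chosen arbitrarily, my addition) of the
# encoding above: threshold the 8 neighbours against the centre pixel,
# then weight the resulting bits clockwise from the top-right.
_demo_patch = np.array([[6, 5, 2],
                        [7, 6, 1],
                        [9, 8, 7]], dtype=np.uint8)
# neighbours >= centre (6): bottom_right(4) + bottom(8) + bottom_left(16)
# + left(32) + top_left(64) = 124
print(lbp_calculated_pixel_x_y(_demo_patch, 1, 1))  # -> 124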
# calculate spatial LBP_x_y codes frame by frame (X-Y plane)
def lbp_calculated_pixel_ortho_x_y(img_stream, frame_count, x, y):
    '''
     64 | 128 |   1
    ----------------
     32 |  0  |   2
    ----------------
     16 |  8  |   4
    '''
    index = frame_count
    # X-Y plane: img_stream has shape (n_frames, height, width)
    center = img_stream[index][x][y]
val_ar = []
val_ar.append(get_pixel(img_stream[index], center, x-1, y+1)) # top_right
val_ar.append(get_pixel(img_stream[index], center, x, y+1)) # right
val_ar.append(get_pixel(img_stream[index], center, x+1, y+1)) # bottom_right
val_ar.append(get_pixel(img_stream[index], center, x+1, y)) # bottom
val_ar.append(get_pixel(img_stream[index], center, x+1, y-1)) # bottom_left
val_ar.append(get_pixel(img_stream[index], center, x, y-1)) # left
val_ar.append(get_pixel(img_stream[index], center, x-1, y-1)) # top_left
val_ar.append(get_pixel(img_stream[index], center, x-1, y)) # top
power_val = [1, 2, 4, 8, 16, 32, 64, 128]
val = 0
for i in range(len(val_ar)):
val += val_ar[i] * power_val[i]
return val
# helper for the two spatio-temporal planes: threshold a 3-D neighbour
def get_pixel_3d(img_stream, center, t, x, y):
    # Return 1 if the neighbour at frame t, pixel (x, y) is >= the centre
    # pixel, else 0. Out-of-range neighbours (sequence/image borders) count
    # as 0; negative indices are rejected explicitly because numpy and
    # Python lists would silently wrap them around.
    new_value = 0
    try:
        if t >= 0 and x >= 0 and y >= 0 and img_stream[t][x][y] >= center:
            new_value = 1
    except IndexError:
        pass
    return new_value

# calculate temporal LBP_x_t codes for all pixels (X-T plane)
def lbp_calculated_pixel_ortho_x_t(img_stream, frame_count, x, y):
    '''
     64 | 128 |   1
    ----------------
     32 |  0  |   2
    ----------------
     16 |  8  |   4
    '''
    index = frame_count
    center = img_stream[index][x][y]
    # neighbours on the X-T plane, clockwise from top-right: (dt, dx)
    offsets = [(1, 1),    # top_right:    x+1, T+1
               (0, 1),    # right:        x+1, T
               (-1, 1),   # bottom_right: x+1, T-1
               (-1, 0),   # bottom:       x,   T-1
               (-1, -1),  # bottom_left:  x-1, T-1
               (0, -1),   # left:         x-1, T
               (1, -1),   # top_left:     x-1, T+1
               (1, 0)]    # top:          x,   T+1
    val_ar = [get_pixel_3d(img_stream, center, index + dt, x + dx, y)
              for dt, dx in offsets]
    power_val = [1, 2, 4, 8, 16, 32, 64, 128]
    val = 0
    for i in range(len(val_ar)):
        val += val_ar[i] * power_val[i]
    return val
# calculate temporal LBP_y_t codes for all pixels (Y-T plane)
def lbp_calculated_pixel_ortho_y_t(img_stream, frame_count, x, y):
    '''
     64 | 128 |   1
    ----------------
     32 |  0  |   2
    ----------------
     16 |  8  |   4
    '''
    index = frame_count
    center = img_stream[index][x][y]
    # neighbours on the Y-T plane, clockwise from top-right: (dt, dy)
    offsets = [(1, 1),    # top_right:    y+1, T+1
               (1, 0),    # right:        y,   T+1
               (1, -1),   # bottom_right: y-1, T+1
               (0, -1),   # bottom:       y-1, T
               (-1, -1),  # bottom_left:  y-1, T-1
               (-1, 0),   # left:         y,   T-1
               (-1, 1),   # top_left:     y+1, T-1
               (0, 1)]    # top:          y+1, T
    val_ar = [get_pixel_3d(img_stream, center, index + dt, x, y + dy)
              for dt, dy in offsets]
    power_val = [1, 2, 4, 8, 16, 32, 64, 128]
    val = 0
    for i in range(len(val_ar)):
        val += val_ar[i] * power_val[i]
    return val
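# Quick sanity check of the three encoders (my addition) on a synthetic
# 3-frame stream whose brightness simply increases over time: the X-Y code
# saturates to 255 (the frame is spatially flat), while the X-T and Y-T
# codes pick up the purely temporal transition.
_demo_stream = [np.full((3, 3), v, dtype=np.uint8) for v in (10, 20, 30)]
print(lbp_calculated_pixel_ortho_x_y(_demo_stream, 1, 1, 1))  # -> 255
print(lbp_calculated_pixel_ortho_x_t(_demo_stream, 1, 1, 1))  # -> 227
print(lbp_calculated_pixel_ortho_y_t(_demo_stream, 1, 1, 1))  # -> 143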
os.chdir(data_path)
orig_dir = os.getcwd()
print(orig_dir)
import time
subject_list = os.listdir(orig_dir) # [sub01 ... sub26]
lbp_feature_all = []
for subject in subject_list:
subject_path_list = os.path.join(orig_dir, subject)
file_list = os.listdir(subject_path_list) #[EP...]
for file in file_list:
file_path_list = os.path.join(subject_path_list, file)
img_list = os.listdir(file_path_list)
img_seqs = []
for img in img_list:
img_path = os.path.join(file_path_list, img)
img_bgr = cv2.imread(img_path)
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
            # cv2.resize takes dsize as (width, height): frames become 150x128
            img_gray = cv2.resize(img_gray, (128, 150))
img_seqs.append(img_gray)
print(np.array(img_seqs).shape)
        # extract the LBP-TOP features from the whole sequence
        start_time = time.time()
        height, width = img_seqs[0].shape
        print("sequence processing started for " + file_path_list)
        count = 0          # index of the current frame within the sequence
        hist_lbp_top = []  # one concatenated 768-D histogram per frame
for image in img_seqs:
img_lbp_x_y = np.zeros((height, width), np.uint8)
img_lbp_x_t = np.zeros((height, width), np.uint8)
img_lbp_y_t = np.zeros((height, width), np.uint8)
for i in range(0, height):
for j in range(0, width):
img_lbp_x_y[i,j]=lbp_calculated_pixel_ortho_x_y(img_seqs,count, i, j)
img_lbp_x_t[i,j]=lbp_calculated_pixel_ortho_x_t(img_seqs,count, i, j)
img_lbp_y_t[i,j]=lbp_calculated_pixel_ortho_y_t(img_seqs,count, i, j)
hist_lbp_x_y = cv2.calcHist([img_lbp_x_y], [0], None, [256], [0, 256])
hist_lbp_x_t = cv2.calcHist([img_lbp_x_t], [0], None, [256], [0, 256])
hist_lbp_y_t = cv2.calcHist([img_lbp_y_t], [0], None, [256], [0, 256])
            # concatenate the three 256-bin plane histograms into a 768-D vector
            histogram = np.concatenate((hist_lbp_x_y, hist_lbp_x_t, hist_lbp_y_t), axis=0)
            hist_lbp_top.append(histogram)
            count = count + 1
print("sequence processing finished for", file_path_list)
print("--- %s seconds ---" % (time.time() - start_time))
        temp = np.array(hist_lbp_top)   # (n_frames, 768, 1)
        print(temp.shape)
        temp = np.squeeze(temp)         # (n_frames, 768)
        # average the per-frame histograms over the whole sequence
        lbp_top_feature = np.sum(temp, axis=0) / temp.shape[0]
        #print(lbp_top_feature.shape)
        lbp_feature_all.append(lbp_top_feature)
# normalization: stack the per-sequence features into a single array and save
lbp_feature_all = np.array(lbp_feature_all)
print(lbp_feature_all.shape)
np.save(LBP_feature_path,lbp_feature_all)
print("complete")
###Output
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP02_01f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub01\EP02_01f
--- 23.221096754074097 seconds ---
(41, 768, 1)
[... the same five-line pattern (input sequence shape / started / finished / elapsed seconds / per-frame histogram stack shape) repeats for each sequence, sub01 onward ...]
(95, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP07_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP07_01
--- 52.00259590148926 seconds ---
(95, 768, 1)
(68, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP08_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP08_02
--- 37.10748910903931 seconds ---
(68, 768, 1)
(34, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP08_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP08_03
--- 18.431276082992554 seconds ---
(34, 768, 1)
(58, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP10_06
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP10_06
--- 31.242526292800903 seconds ---
(58, 768, 1)
(45, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP11_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP11_01
--- 24.237813711166382 seconds ---
(45, 768, 1)
(86, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP11_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP11_02
--- 46.511382818222046 seconds ---
(86, 768, 1)
(54, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP12_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP12_03
--- 28.826361656188965 seconds ---
(54, 768, 1)
(46, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_01
--- 25.007097005844116 seconds ---
(46, 768, 1)
(86, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_03
--- 46.07504653930664 seconds ---
(86, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_04
--- 20.135481119155884 seconds ---
(36, 768, 1)
(46, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_06
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_06
--- 24.61832594871521 seconds ---
(46, 768, 1)
(96, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_09
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP13_09
--- 51.18272614479065 seconds ---
(96, 768, 1)
(31, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_01
--- 16.53278875350952 seconds ---
(31, 768, 1)
(38, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_03
--- 20.180233478546143 seconds ---
(38, 768, 1)
(77, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_04
--- 41.40814566612244 seconds ---
(77, 768, 1)
(46, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_05
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP15_05
--- 24.602710723876953 seconds ---
(46, 768, 1)
(64, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP16_01f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP16_01f
--- 34.15940189361572 seconds ---
(64, 768, 1)
(68, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP18_07
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub17\EP18_07
--- 37.320502042770386 seconds ---
(68, 768, 1)
(37, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub18\EP08_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub18\EP08_01
--- 20.128291368484497 seconds ---
(37, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub18\EP18_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub18\EP18_01
--- 33.75273871421814 seconds ---
(61, 768, 1)
(46, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub18\EP19_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub18\EP19_01
--- 24.660354375839233 seconds ---
(46, 768, 1)
(100, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP01_01f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP01_01f
--- 54.14876127243042 seconds ---
(100, 768, 1)
(92, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP01_02f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP01_02f
--- 50.13623666763306 seconds ---
(92, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP02_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP02_01
--- 32.58148717880249 seconds ---
(61, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP06_01f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP06_01f
--- 29.67001724243164 seconds ---
(56, 768, 1)
(81, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP06_02f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP06_02f
--- 43.89953422546387 seconds ---
(81, 768, 1)
(80, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP08_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP08_02
--- 42.95509147644043 seconds ---
(80, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP11_01f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP11_01f
--- 30.56592559814453 seconds ---
(56, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP11_04f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP11_04f
--- 29.904568433761597 seconds ---
(56, 768, 1)
(86, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP13_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP13_01
--- 45.4558207988739 seconds ---
(86, 768, 1)
(51, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP15_03f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP15_03f
--- 27.64949059486389 seconds ---
(51, 768, 1)
(51, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP16_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP16_01
--- 28.183801651000977 seconds ---
(51, 768, 1)
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP16_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP16_02
--- 22.283297538757324 seconds ---
(41, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_01
--- 19.062458276748657 seconds ---
(36, 768, 1)
(46, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_02
--- 24.99239468574524 seconds ---
(46, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_03
--- 30.976591110229492 seconds ---
(56, 768, 1)
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub19\EP19_04
--- 22.113358974456787 seconds ---
(41, 768, 1)
(51, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP01_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP01_03
--- 28.248898029327393 seconds ---
(51, 768, 1)
(67, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP03_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP03_02
--- 36.11355781555176 seconds ---
(67, 768, 1)
(31, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP06_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP06_03
--- 16.43480896949768 seconds ---
(31, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP07_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP07_04
--- 30.715346574783325 seconds ---
(56, 768, 1)
(86, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP10_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP10_02
--- 46.9775767326355 seconds ---
(86, 768, 1)
(66, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP12_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP12_01
--- 34.782493591308594 seconds ---
(66, 768, 1)
(51, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP13_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP13_02
--- 27.80975341796875 seconds ---
(51, 768, 1)
(96, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP15_03f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP15_03f
--- 53.33187174797058 seconds ---
(96, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP16_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP16_01
--- 19.535189151763916 seconds ---
(36, 768, 1)
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP16_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP16_04
--- 22.765284538269043 seconds ---
(41, 768, 1)
(79, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP18_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub20\EP18_03
--- 43.99681496620178 seconds ---
(79, 768, 1)
(31, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub21\EP01_07
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub21\EP01_07
--- 16.787142753601074 seconds ---
(31, 768, 1)
(96, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub21\EP05_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub21\EP05_02
--- 52.94481873512268 seconds ---
(96, 768, 1)
(97, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub22\EP01_12
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub22\EP01_12
--- 53.25487995147705 seconds ---
(97, 768, 1)
(106, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub22\EP13_08
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub22\EP13_08
--- 57.28993535041809 seconds ---
(106, 768, 1)
(95, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP02_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP02_01
--- 52.66643571853638 seconds ---
(95, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP03_14f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP03_14f
--- 34.54217481613159 seconds ---
(61, 768, 1)
(100, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP04_03f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP04_03f
--- 54.19701290130615 seconds ---
(100, 768, 1)
(66, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP05_24f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP05_24f
--- 35.68053722381592 seconds ---
(66, 768, 1)
(81, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP05_25f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP05_25f
--- 43.78041481971741 seconds ---
(81, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP07_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP07_01
--- 33.19082236289978 seconds ---
(61, 768, 1)
(85, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP12_02f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP12_02f
--- 45.969687938690186 seconds ---
(85, 768, 1)
(76, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP12_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP12_03
--- 41.33297371864319 seconds ---
(76, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP13_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP13_03
--- 29.718204021453857 seconds ---
(56, 768, 1)
(66, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP13_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP13_04
--- 36.362021684646606 seconds ---
(66, 768, 1)
(71, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP13_07f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP13_07f
--- 37.909383058547974 seconds ---
(71, 768, 1)
(66, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP17_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub23\EP17_01
--- 34.98381423950195 seconds ---
(66, 768, 1)
(99, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP01_08
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP01_08
--- 52.94194149971008 seconds ---
(99, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP02_02f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP02_02f
--- 18.94686007499695 seconds ---
(36, 768, 1)
(43, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP07_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP07_01
--- 23.05140233039856 seconds ---
(43, 768, 1)
(38, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP07_04f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP07_04f
--- 20.02903389930725 seconds ---
(38, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP08_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP08_02
--- 33.0203971862793 seconds ---
(61, 768, 1)
(24, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP10_01f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP10_01f
--- 13.278198957443237 seconds ---
(24, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP10_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP10_02
--- 19.167975902557373 seconds ---
(36, 768, 1)
(43, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP10_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP10_03
--- 23.59839701652527 seconds ---
(43, 768, 1)
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP12_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP12_01
--- 21.64469313621521 seconds ---
(41, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP18_03
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub24\EP18_03
--- 19.075254678726196 seconds ---
(36, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP03_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP03_01
--- 33.96526575088501 seconds ---
(61, 768, 1)
(84, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP03_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP03_02
--- 44.4824595451355 seconds ---
(84, 768, 1)
(36, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP09_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP09_02
--- 20.10835027694702 seconds ---
(36, 768, 1)
(76, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP10_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP10_01
--- 40.35464596748352 seconds ---
(76, 768, 1)
(98, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP10_10
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP10_10
--- 52.507179498672485 seconds ---
(98, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP12_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP12_01
--- 30.677897214889526 seconds ---
(56, 768, 1)
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP18_04f
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub25\EP18_04f
--- 21.799452781677246 seconds ---
(41, 768, 1)
(86, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP03_10
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP03_10
--- 46.58915328979492 seconds ---
(86, 768, 1)
(41, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP07_28
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP07_28
--- 21.730900526046753 seconds ---
(41, 768, 1)
(99, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP07_37
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP07_37
--- 52.71217131614685 seconds ---
(99, 768, 1)
(59, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP08_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP08_04
--- 32.478095293045044 seconds ---
(59, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP09_04
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP09_04
--- 32.70830750465393 seconds ---
(61, 768, 1)
(56, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP09_09
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP09_09
--- 29.97273850440979 seconds ---
(56, 768, 1)
(58, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP13_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP13_01
--- 31.470333337783813 seconds ---
(58, 768, 1)
(85, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP13_02
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP13_02
--- 45.635282039642334 seconds ---
(85, 768, 1)
(82, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP13_11
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP13_11
--- 45.39938950538635 seconds ---
(82, 768, 1)
(40, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP15_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP15_01
--- 21.212344884872437 seconds ---
(40, 768, 1)
(51, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP16_01
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP16_01
--- 27.8880934715271 seconds ---
(51, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_44
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_44
--- 32.37933540344238 seconds ---
(61, 768, 1)
(71, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_46
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_46
--- 38.420966386795044 seconds ---
(71, 768, 1)
(81, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_47
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_47
--- 43.181153774261475 seconds ---
(81, 768, 1)
(65, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_49
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_49
--- 34.377336740493774 seconds ---
(65, 768, 1)
(84, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_50
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_50
--- 47.82361912727356 seconds ---
(84, 768, 1)
(61, 150, 128)
sequence processing started for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_51
sequence processing finished for D:\NUS\NUS_academic\Project_Code\Cropped\sub26\EP18_51
--- 33.16539764404297 seconds ---
(61, 768, 1)
(255, 768)
complete
###Markdown
2 Packaging the labels
###Code
# Move into the directory that holds the label file
os.chdir(label_path)
orig_dir = os.getcwd()
print(orig_dir)

# Column 3 of Tag.csv holds the emotion label for each of the 255 clips
Tag = pd.read_csv('Tag.csv', usecols=[3])
Tag = np.array(Tag).reshape(255,)
print(Tag.shape)

# Persist the labels as a .npy file for the later cells
np.save(Label_path, Tag)
###Output
(255,)
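###Markdown
A quick round-trip check can confirm that the saved `.npy` file matches the labels read from `Tag.csv`. This is only a sketch: it assumes the `Label_path` and `Tag` variables from the cell above are still in scope, and `reloaded` is a name introduced here for illustration.
###Code
# Reload the saved labels and verify they are identical to the in-memory array
reloaded = np.load(Label_path)
print(reloaded.shape, np.array_equal(reloaded, Tag))
###Output
_____no_output_____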
###Markdown
3 Removing unusable data
###Code
# Load the two feature matrices and the label vector saved earlier
HOF_dataset = np.load(HoF_feature_path)
print(HOF_dataset.shape)
LBP_dataset = np.load(LBP_feature_path)
print(LBP_dataset.shape)
label = np.load(Label_path)

# Collect the indices of all class-4 samples, which are excluded from the study
delete_list = []
for i in range(255):
    if label[i] == 4:
        delete_list.append(i)
print(len(delete_list))

# Drop those rows from both feature matrices and from the label vector
HOF_dataset = np.delete(HOF_dataset, delete_list, axis=0)
LBP_dataset = np.delete(LBP_dataset, delete_list, axis=0)
label = np.delete(label, delete_list)
print(HOF_dataset.shape)
print(LBP_dataset.shape)
print(label.shape)
###Output
(150, 9180)
(150, 768)
(150,)
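###Markdown
Before classifying, it is worth checking how the 150 retained samples are distributed over the remaining classes. A minimal sketch, relying only on the filtered `label` array from the cell above; `classes` and `counts` are names introduced here for illustration.
###Code
# Count the remaining samples per emotion class after dropping class 4
classes, counts = np.unique(label, return_counts=True)
for c, n in zip(classes, counts):
    print(f"class {c}: {n} samples")
###Output
_____no_output_____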
###Markdown
4 SVM classification 4.1 Splitting into training and test sets
###Code
from sklearn.model_selection import train_test_split

# Note: merge_all is only constructed in cell 4.2 below, so run that cell first
(trDat,
 vlDat,
 trLbl,
 vlLbl) = train_test_split(merge_all,
                           label,
                           # Make sure the split is applied on each class
                           stratify=label,
                           test_size=0.15,
                           random_state=228,
                           shuffle=True)
print("The shape of trDat is", trDat.shape)
print("The shape of vlDat is", vlDat.shape)
print("The shape of trlbl is", trLbl.shape)
print("The shape of vllbl is", vlLbl.shape)
###Output
The shape of trDat is (127, 9948)
The shape of vlDat is (23, 9948)
The shape of trlbl is (127,)
The shape of vllbl is (23,)
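###Markdown
The stratified 85/15 split above can also back a simple hold-out evaluation that complements the leave-one-out loop in 4.2. A sketch only: the linear-kernel hyperparameters are copied from that loop, `merge_all` from cell 4.2 must already exist for the split to have run, and `holdout_model` is a name introduced here for illustration.
###Code
from sklearn import svm

# Fit a linear SVM on the training split and score it on the held-out 15%
holdout_model = svm.SVC(kernel='linear', C=0.1, gamma=0.1).fit(trDat, trLbl)
print("hold-out accuracy:", holdout_model.score(vlDat, vlLbl))
###Output
_____no_output_____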
###Markdown
4.2 Merging the features
###Code
# Concatenate the HOF (9180-dim) and LBP (768-dim) features into one vector per clip
merge_all = np.concatenate((HOF_dataset, LBP_dataset), axis=1)
print(merge_all.shape)

# Leave-one-out cross-validation with a linear SVM
from sklearn import svm
from sklearn.model_selection import LeaveOneOut

loo = LeaveOneOut()
count = 0
i = 0
for train, test in loo.split(merge_all):
    print("loop: ", i)
    i = i + 1
    print(f'train index: {train} , test index: {test}')
    print('--------------------')
    # Note: the classifier is fit on the HOF features only; merge_all is used
    # just to generate the split indices (both have 150 rows)
    svm_model = svm.SVC(kernel='linear', C=0.1, gamma=0.1).fit(HOF_dataset[train], label[train])
    y_pred = svm_model.predict(HOF_dataset[test])
    print("y_pred: ", y_pred)
    if y_pred == label[test]:
        count = count + 1
print(count/150)  # LOOCV accuracy over the 150 retained samples
###Output
loop: 0
train index: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [0]
--------------------
y_pred: [0]
loop: 1
train index: [ 0 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [1]
--------------------
y_pred: [0]
loop: 2
train index: [ 0 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [2]
--------------------
y_pred: [0]
loop: 3
train index: [ 0 1 2 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [3]
--------------------
y_pred: [0]
loop: 4
train index: [ 0 1 2 3 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [4]
--------------------
y_pred: [0]
loop: 5
train index: [ 0 1 2 3 4 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [5]
--------------------
y_pred: [0]
loop: 6
train index: [ 0 1 2 3 4 5 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [6]
--------------------
y_pred: [0]
loop: 7
train index: [ 0 1 2 3 4 5 6 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [7]
--------------------
y_pred: [0]
loop: 8
train index: [ 0 1 2 3 4 5 6 7 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [8]
--------------------
y_pred: [0]
loop: 9
train index: [ 0 1 2 3 4 5 6 7 8 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [9]
--------------------
y_pred: [0]
loop: 10
train index: [ 0 1 2 3 4 5 6 7 8 9 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [10]
--------------------
y_pred: [0]
loop: 11
train index: [ 0 1 2 3 4 5 6 7 8 9 10 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [11]
--------------------
y_pred: [0]
loop: 12
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [12]
--------------------
y_pred: [0]
loop: 13
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [13]
--------------------
y_pred: [0]
loop: 14
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [14]
--------------------
y_pred: [0]
loop: 15
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [15]
--------------------
y_pred: [0]
loop: 16
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [16]
--------------------
y_pred: [0]
loop: 17
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [17]
--------------------
y_pred: [0]
loop: 18
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [18]
--------------------
y_pred: [0]
loop: 19
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [19]
--------------------
y_pred: [0]
loop: 20
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [20]
--------------------
y_pred: [0]
loop: 21
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [21]
--------------------
y_pred: [0]
loop: 22
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [22]
--------------------
y_pred: [0]
loop: 23
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [23]
--------------------
y_pred: [0]
loop: 24
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [24]
--------------------
y_pred: [0]
loop: 25
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [25]
--------------------
y_pred: [0]
loop: 26
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [26]
--------------------
y_pred: [0]
loop: 27
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [27]
--------------------
y_pred: [0]
loop: 28
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [28]
--------------------
y_pred: [0]
loop: 29
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [29]
--------------------
y_pred: [0]
loop: 30
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [30]
--------------------
y_pred: [0]
loop: 31
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [31]
--------------------
y_pred: [0]
loop: 32
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [32]
--------------------
y_pred: [0]
loop: 33
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [33]
--------------------
y_pred: [0]
loop: 34
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [34]
--------------------
y_pred: [0]
loop: 35
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [35]
--------------------
y_pred: [0]
loop: 36
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [36]
--------------------
y_pred: [0]
loop: 37
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [37]
--------------------
y_pred: [0]
loop: 38
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [38]
--------------------
y_pred: [0]
loop: 39
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [39]
--------------------
y_pred: [0]
loop: 40
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [40]
--------------------
y_pred: [0]
loop: 41
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [41]
--------------------
y_pred: [0]
loop: 42
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
... [output truncated: the remaining leave-one-out folds through loop 149 repeat the same pattern — each prints the 149 training indices, the single held-out test index, and y_pred: [0] — ending with the overall accuracy below] ...
0.42
###Markdown
4.2 SVM Training 4.2.1 GridSearch
###Code
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report

# Search over kernel type, C, and gamma with 5-fold CV on the training split.
parameters = {'kernel': ['linear', 'rbf', 'sigmoid', 'poly'],
              'C': np.linspace(0.1, 20, 50),
              'gamma': np.linspace(0.1, 20, 20)}
svc = svm.SVC()
model = GridSearchCV(svc, parameters, cv=5, scoring='accuracy')
model.fit(trDat, trLbl)
print(model.best_params_)

# Evaluate the best estimator found by the search on the held-out validation split.
y_pred = model.predict(vlDat)
print(model.score(vlDat, vlLbl))
print(classification_report(vlLbl, y_pred))
###Output
{'C': 0.1, 'gamma': 0.1, 'kernel': 'linear'}
0.4782608695652174
              precision    recall  f1-score   support

           0       0.70      0.70      0.70        10
           1       0.29      0.40      0.33         5
           2       0.00      0.00      0.00         4
           3       0.67      0.50      0.57         4

    accuracy                           0.48        23
   macro avg       0.41      0.40      0.40        23
weighted avg       0.48      0.48      0.48        23
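###Markdown
The fitted `GridSearchCV` object also keeps the full grid of per-combination results, which helps to see how close the runner-up parameter settings came to the winner. A minimal sketch (not part of the original run), assuming the `model` fitted in the cell above:
###Code
import pandas as pd
# Rank every parameter combination by its mean cross-validated accuracy.
cv_results = pd.DataFrame(model.cv_results_)
top5 = cv_results.sort_values('rank_test_score')[
    ['params', 'mean_test_score', 'std_test_score']].head()
###Output
_____no_output_____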
###Markdown
4.2.2 Single Training Run
###Code
from sklearn import svm
from sklearn.metrics import confusion_matrix

# Train a single linear SVM on the training split and inspect its confusion matrix.
svm_model = svm.SVC(kernel='linear', C=2, decision_function_shape='ovo').fit(trDat, trLbl)
y_pred = svm_model.predict(vlDat)
print(confusion_matrix(vlLbl, y_pred))
print("train score:", svm_model.score(trDat, trLbl))
print("test score:", svm_model.score(vlDat, vlLbl))
###Output
[[6 0 0 0]
[0 2 1 0]
[1 1 1 0]
[0 1 1 1]]
train score: 1.0
test score: 0.6666666666666666
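###Markdown
A train score of 1.0 against a test score of about 0.67 points to overfitting on this small split. The raw matrix above is also easier to read with labeled axes; a minimal sketch, assuming scikit-learn >= 1.0 and the `vlLbl` and `y_pred` arrays from the cell above:
###Code
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
# Plot the same confusion matrix with class labels on both axes.
ConfusionMatrixDisplay.from_predictions(vlLbl, y_pred)
plt.show()
###Output
_____no_output_____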
###Markdown
4.2.3 Leave-One-Out Training
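The explicit loop below prints every train/test split so each fold is visible. For reference, the same leave-one-out accuracy can also be obtained in a single call; this is a minimal alternative sketch (not the notebook's original method), assuming the `HOF_dataset` and `label` arrays used below:
###Code
from sklearn import svm
from sklearn.model_selection import LeaveOneOut, cross_val_score
# One accuracy score per held-out sample; the mean is the leave-one-out accuracy.
loo_clf = svm.SVC(kernel='rbf', C=2.536734693877551, gamma=1.1473684210526316)
loo_scores = cross_val_score(loo_clf, HOF_dataset, label, cv=LeaveOneOut())
loo_accuracy = loo_scores.mean()
###Output
_____no_output_____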
###Code
from sklearn import svm
from sklearn.model_selection import LeaveOneOut

loo = LeaveOneOut()
count = 0
# Refit the tuned RBF SVM once per sample, holding that sample out as the test point.
for i, (train, test) in enumerate(loo.split(HOF_dataset)):
    print("loop: ", i)
    print(f'train index: {train} , test index: {test}')
    # print(f'train data: {HOF_dataset[train]} , test data: {HOF_dataset[test]}')
    print('--------------------')
    svm_model = svm.SVC(kernel='rbf', C=2.536734693877551,
                        gamma=1.1473684210526316).fit(HOF_dataset[train], label[train])
    y_pred = svm_model.predict(HOF_dataset[test])
    print("y_pred: ", y_pred)
    if y_pred == label[test]:
        count = count + 1
print(count / len(HOF_dataset))  # leave-one-out accuracy over all samples
###Output
loop: 0
train index: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [0]
--------------------
y_pred: [0]
loop: 1
train index: [ 0 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [1]
--------------------
y_pred: [0]
loop: 2
train index: [ 0 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [2]
--------------------
y_pred: [0]
loop: 3
train index: [ 0 1 2 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [3]
--------------------
y_pred: [2]
loop: 4
train index: [ 0 1 2 3 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [4]
--------------------
y_pred: [2]
loop: 5
train index: [ 0 1 2 3 4 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [5]
--------------------
y_pred: [2]
loop: 6
train index: [ 0 1 2 3 4 5 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [6]
--------------------
y_pred: [2]
loop: 7
train index: [ 0 1 2 3 4 5 6 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [7]
--------------------
y_pred: [2]
loop: 8
train index: [ 0 1 2 3 4 5 6 7 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [8]
--------------------
y_pred: [2]
loop: 9
train index: [ 0 1 2 3 4 5 6 7 8 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [9]
--------------------
y_pred: [3]
loop: 10
train index: [ 0 1 2 3 4 5 6 7 8 9 11 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [10]
--------------------
y_pred: [3]
loop: 11
train index: [ 0 1 2 3 4 5 6 7 8 9 10 12 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [11]
--------------------
y_pred: [3]
loop: 12
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 13 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [12]
--------------------
y_pred: [0]
loop: 13
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 14 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [13]
--------------------
y_pred: [0]
loop: 14
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 15 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [14]
--------------------
y_pred: [0]
loop: 15
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 16 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [15]
--------------------
y_pred: [0]
loop: 16
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 17 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [16]
--------------------
y_pred: [0]
loop: 17
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 18
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [17]
--------------------
y_pred: [0]
loop: 18
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [18]
--------------------
y_pred: [0]
loop: 19
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [19]
--------------------
y_pred: [0]
loop: 20
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [20]
--------------------
y_pred: [3]
loop: 21
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [21]
--------------------
y_pred: [3]
loop: 22
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [22]
--------------------
y_pred: [3]
loop: 23
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [23]
--------------------
y_pred: [3]
loop: 24
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [24]
--------------------
y_pred: [3]
loop: 25
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [25]
--------------------
y_pred: [0]
loop: 26
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 27 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [26]
--------------------
y_pred: [1]
loop: 27
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 28 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [27]
--------------------
y_pred: [3]
loop: 28
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 29 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [28]
--------------------
y_pred: [0]
loop: 29
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 30 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [29]
--------------------
y_pred: [0]
loop: 30
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 31 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [30]
--------------------
y_pred: [3]
loop: 31
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 32 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [31]
--------------------
y_pred: [0]
loop: 32
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 33 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [32]
--------------------
y_pred: [0]
loop: 33
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 34 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [33]
--------------------
y_pred: [0]
loop: 34
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 35 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [34]
--------------------
y_pred: [0]
loop: 35
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 36
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [35]
--------------------
y_pred: [0]
loop: 36
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [36]
--------------------
y_pred: [0]
loop: 37
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [37]
--------------------
y_pred: [1]
loop: 38
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [38]
--------------------
y_pred: [2]
loop: 39
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [39]
--------------------
y_pred: [1]
loop: 40
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [40]
--------------------
y_pred: [2]
loop: 41
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [41]
--------------------
y_pred: [2]
loop: 42
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [42]
--------------------
y_pred: [1]
loop: 43
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [43]
--------------------
y_pred: [1]
loop: 44
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [44]
--------------------
y_pred: [1]
loop: 45
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [45]
--------------------
y_pred: [2]
loop: 46
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [46]
--------------------
y_pred: [2]
loop: 47
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [47]
--------------------
y_pred: [0]
loop: 48
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [48]
--------------------
y_pred: [0]
loop: 49
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [49]
--------------------
y_pred: [0]
loop: 50
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [50]
--------------------
y_pred: [0]
loop: 51
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [51]
--------------------
y_pred: [0]
loop: 52
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [52]
--------------------
y_pred: [0]
loop: 53
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [53]
--------------------
y_pred: [0]
loop: 54
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [54]
--------------------
y_pred: [0]
loop: 55
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [55]
--------------------
y_pred: [3]
loop: 56
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [56]
--------------------
y_pred: [3]
loop: 57
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [57]
--------------------
y_pred: [3]
loop: 58
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 59 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [58]
--------------------
y_pred: [3]
loop: 59
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 60 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [59]
--------------------
y_pred: [3]
loop: 60
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 61 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [60]
--------------------
y_pred: [0]
loop: 61
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 62 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [61]
--------------------
y_pred: [0]
loop: 62
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 63 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [62]
--------------------
y_pred: [0]
loop: 63
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 64 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [63]
--------------------
y_pred: [0]
loop: 64
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 65 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [64]
--------------------
y_pred: [1]
loop: 65
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 66 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [65]
--------------------
y_pred: [1]
loop: 66
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 67 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [66]
--------------------
y_pred: [1]
loop: 67
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 68 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [67]
--------------------
y_pred: [0]
loop: 68
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 69 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [68]
--------------------
y_pred: [1]
loop: 69
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 70 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [69]
--------------------
y_pred: [0]
loop: 70
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 71 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [70]
--------------------
y_pred: [0]
loop: 71
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 72
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [71]
--------------------
y_pred: [1]
loop: 72
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [72]
--------------------
y_pred: [0]
loop: 73
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [73]
--------------------
y_pred: [2]
loop: 74
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [74]
--------------------
y_pred: [1]
loop: 75
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [75]
--------------------
y_pred: [1]
loop: 76
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 77 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [76]
--------------------
y_pred: [0]
loop: 77
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 78 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [77]
--------------------
y_pred: [0]
loop: 78
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 79 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [78]
--------------------
y_pred: [0]
loop: 79
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 80 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [79]
--------------------
y_pred: [1]
loop: 80
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 81 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [80]
--------------------
y_pred: [0]
loop: 81
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 82 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [81]
--------------------
y_pred: [0]
loop: 82
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 83 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [82]
--------------------
y_pred: [2]
loop: 83
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 84 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [83]
--------------------
y_pred: [2]
loop: 84
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 85 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [84]
--------------------
y_pred: [1]
loop: 85
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 86 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [85]
--------------------
y_pred: [0]
loop: 86
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 87 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [86]
--------------------
y_pred: [1]
loop: 87
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 88 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [87]
--------------------
y_pred: [0]
loop: 88
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 89 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [88]
--------------------
y_pred: [2]
loop: 89
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 90
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [89]
--------------------
y_pred: [2]
loop: 90
train index: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
145 146 147 148 149] , test index: [90]
--------------------
y_pred: [1]
loop: 91
train index: [  0   1   2 ...  89  90  92 ... 147 148 149] , test index: [91]
--------------------
y_pred: [0]
[... loops 92-148 elided: each leave-one-out iteration trains on the other 149 samples and predicts the single held-out one; predictions were mostly 0, with occasional 1, 2 and 3 ...]
loop: 149
train index: [  0   1   2 ... 146 147 148] , test index: [149]
--------------------
y_pred: [0]
0.5369127516778524
###Markdown
5 Other test code
###Code
import re

numbers = re.compile(r'(\d+)')

def numericalsort(value):
    # Split a string into alternating text/number parts so it sorts in natural (numeric) order
    parts = numbers.split(value)
    parts[1::2] = map(int, parts[1::2])  # convert the digit groups to integers
    return parts

cal = numericalsort("EP02_01")
cal
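
# Hypothetical usage: natural-sort a list of clip names (the names below are made up for illustration)
files = ["EP02_10", "EP02_2", "EP02_1"]
sorted(files, key=numericalsort)  # -> ['EP02_1', 'EP02_2', 'EP02_10']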
###Output
_____no_output_____
###Markdown
6 Feature dimensionality reduction Since the feature dimension is very high, we try dimensionality reduction to filter the features. 6.1 PCA dimensionality reduction
###Code
HOF_dataset = np.load(HoF_feature_path)
%config InlineBackend.figure_format="svg"
from sklearn.decomposition import PCA
from matplotlib import pyplot as plt
candidate_components = range(1, 50)
explained_ratios = []
# Scan how much variance each number of PCA components retains (here on the LBP features)
for c in candidate_components:
    pca = PCA(n_components=c)
    X_pca = pca.fit_transform(LBP_dataset)
    explained_ratios.append(np.sum(pca.explained_variance_ratio_))
plt.figure(figsize=(10,6), dpi=144)
plt.grid()
plt.plot(candidate_components, explained_ratios)
plt.xlabel('Number of PCA Components')
plt.ylabel('Explained Variance Ratio')
plt.title('Explained variance ratio for PCA')
plt.yticks(np.arange(0.5,1.05,.05))
plt.xticks(np.arange(0,50,1))
# Keep 80 principal components for the HoF features, which keeps the explained variance ratio above 95%
pca = PCA(n_components=80)
X_pca_HOF = pca.fit_transform(HOF_dataset)
X_pca_HOF.shape
X_pca_HOF
pca = PCA(n_components=11)
X_pca_LBP = pca.fit_transform(LBP_dataset)
X_pca_LBP.shape
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(copy=True, with_mean=True, with_std=True)
new_X_pca_LBP = scaler.fit_transform(X_pca_LBP)
new_X_pca_LBP
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(copy=True, with_mean=True, with_std=True)
new_X_pca_HOF = scaler.fit_transform(X_pca_HOF)
new_X_pca_HOF
merge_all = np.concatenate((new_X_pca_HOF,new_X_pca_LBP),axis=1)
print(merge_all.shape)
merge_all_origin = np.concatenate((HOF_dataset,LBP_dataset),axis=1)
print(merge_all_origin.shape)
from sklearn.model_selection import train_test_split
(trDat,
vlDat,
trLbl,
vlLbl) = train_test_split(X_pca_HOF,
label,
# Make sure the split is applied on each class
stratify=label,
test_size=0.1,
random_state=228,
shuffle=True)
print("The shape of trDat is", trDat.shape)
print("The shape of vlDat is", vlDat.shape)
print("The shape of trlbl is", trLbl.shape)
print("The shape of vllbl is", vlLbl.shape)
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
parameters={'kernel':['linear','rbf','sigmoid','poly'],'C':np.linspace(0.1,20,50),'gamma':np.linspace(0.1,20,20)}
param_grid ={'C':[1,5,10,50,100],
'gamma':[0.0001,0.0005,0.001,0.005,0.01]}
# clf =GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), parameters)
# clf = clf.fit(trDat, trLbl)
svc = svm.SVC(kernel='linear')
model = GridSearchCV(svc, param_grid, cv=5, scoring='accuracy', verbose=2)
model.fit(X_pca_HOF, label)  # assuming the PCA-reduced HoF features were intended; the original fit on a leftover X_pca
print(model.best_params_, model.best_score_)
from sklearn.model_selection import LeaveOneOut
loo = LeaveOneOut()
count = 0
i = 0
y_pre = []
for train,test in loo.split(merge_all):
print("loop: ", i)
i=i+1
#print(f'train index: {train} , test index: {test}')
# print(f'train data: {HOF_dataset[train]} , test data: {HOF_dataset[test]}')
print('--------------------')
svm_model = svm.SVC(kernel = 'linear', C = 2).fit(merge_all[train],label[train])
y_pred = svm_model.predict(merge_all[test])
y_pre.append(y_pred)
print("y_pred: ",y_pred)
if y_pred == label[test]:
count = count +1
print(count/150)
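
# A more concise equivalent of the manual loop above (a sketch, reusing the same merge_all/label arrays):
from sklearn.model_selection import cross_val_score
loo_scores = cross_val_score(svm.SVC(kernel='linear', C=2), merge_all, label, cv=LeaveOneOut())
print(loo_scores.mean())  # fraction of correctly predicted held-out samples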
# only hoof
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(label, y_pre))
print(confusion_matrix(label, y_pre))
# only LBP
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(label, y_pre))
print(confusion_matrix(label, y_pre))
# only mix
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(label, y_pre))
print(confusion_matrix(label, y_pre))
# only RF after PCA mix
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(label, y_pre))
print(confusion_matrix(label, y_pre))
# only RF after PCA mix_origin
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(label, y_pre))
print(confusion_matrix(label, y_pre))
###Output
precision recall f1-score support
0 0.56 0.92 0.69 63
1 0.30 0.09 0.14 32
2 0.47 0.30 0.36 27
3 0.53 0.36 0.43 28
accuracy 0.53 150
macro avg 0.46 0.42 0.41 150
weighted avg 0.48 0.53 0.47 150
[[58 0 0 5]
[18 3 8 3]
[12 6 8 1]
[16 1 1 10]]
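###Markdown
Optional visualization (a sketch using the `label` and `y_pre` arrays already in scope): render the confusion matrix above as a heatmap.
###Code
from sklearn.metrics import confusion_matrix
from matplotlib import pyplot as plt

cm = confusion_matrix(label, y_pre)
plt.imshow(cm, cmap='Blues')
plt.colorbar()
plt.xlabel('Predicted label')
plt.ylabel('True label')
# Annotate each cell with its count
for r in range(cm.shape[0]):
    for c in range(cm.shape[1]):
        plt.text(c, r, cm[r, c], ha='center', va='center')
plt.show()
###Output
_____no_output_____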
###Markdown
6.2 LASSO dimensionality reduction
###Code
merge_all.shape
###Output
_____no_output_____
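###Markdown
No L1-based selection was actually run in this section; below is a minimal sketch (reusing the `merge_all` and `label` arrays) of the classification analogue of LASSO: L1-penalized logistic regression combined with `SelectFromModel`.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

# The L1 penalty drives uninformative coefficients to zero;
# SelectFromModel then keeps only the surviving features.
l1_model = LogisticRegression(penalty='l1', solver='liblinear', C=1.0).fit(merge_all, label)
selector = SelectFromModel(l1_model, prefit=True)
merge_all_l1 = selector.transform(merge_all)
print(merge_all_l1.shape)
###Output
_____no_output_____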
###Markdown
6.3 Random Forest dimensionality reduction
###Code
# Hyperparameter tuning reference: https://zhuanlan.zhihu.com/p/126288078
from sklearn.model_selection import train_test_split
(trDat,
vlDat,
trLbl,
vlLbl) = train_test_split(merge_all,
label,
# Make sure the split is applied on each class
stratify=label,
test_size=0.2,
random_state=228,
shuffle=True)
print("The shape of trDat is", trDat.shape)
print("The shape of vlDat is", vlDat.shape)
print("The shape of trlbl is", trLbl.shape)
print("The shape of vllbl is", vlLbl.shape)
HOF_dataset.shape
#label.shape
# Random forest on the merged feature set below
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
RFC = RandomForestClassifier(n_estimators=18,n_jobs= -1 ,random_state=1)
RFC.fit(merge_all,label)
#y_pred = RFC.predict(vlDat)
print("train score:",RFC.score(merge_all,label))
# print("test score:",RFC.score(vlDat,vlLbl))
# print(classification_report(vlLbl, y_pred))
# HOF
from sklearn.ensemble import RandomForestClassifier
RFC = RandomForestClassifier(n_estimators=7000,n_jobs= -1 ,random_state=0)
RFC.fit(trDat,trLbl)
y_pred = RFC.predict(vlDat)
print("train score:",RFC.score(trDat,trLbl))
print("test score:",RFC.score(vlDat,vlLbl))
print(classification_report(vlLbl, y_pred))
from sklearn.model_selection import LeaveOneOut
loo = LeaveOneOut()
count = 0
i = 0
y_pre = []
for train,test in loo.split(merge_all):
print("loop: ", i)
i=i+1
#print(f'train index: {train} , test index: {test}')
# print(f'train data: {HOF_dataset[train]} , test data: {HOF_dataset[test]}')
print('--------------------')
RFC = RandomForestClassifier(n_estimators=5000,n_jobs= -1 ,random_state=0)
RFC.fit(merge_all[train],label[train])
#svm_model = svm.SVC(kernel = 'linear', C = 1.3183673469387756,gamma=0.1).fit(X_pca[train],label[train])
y_pred = RFC.predict(merge_all[test])
print("y_pred: ",y_pred)
y_pre.append(y_pred)
if y_pred == label[test]:
count = count +1
print(count/150)
from sklearn.model_selection import LeaveOneOut
loo = LeaveOneOut()
count = 0
i = 0
y_pre = []
for train,test in loo.split(merge_all_origin):
print("loop: ", i)
i=i+1
#print(f'train index: {train} , test index: {test}')
# print(f'train data: {HOF_dataset[train]} , test data: {HOF_dataset[test]}')
print('--------------------')
RFC = RandomForestClassifier(n_estimators=5000,n_jobs= -1 ,random_state=0)
RFC.fit(merge_all_origin[train],label[train])
#svm_model = svm.SVC(kernel = 'linear', C = 1.3183673469387756,gamma=0.1).fit(X_pca[train],label[train])
y_pred = RFC.predict(merge_all_origin[test])
print("y_pred: ",y_pred)
y_pre.append(y_pred)
if y_pred == label[test]:
count = count +1
print(count/150)
# Inspect the feature importances of the (last fitted) random forest; higher = more informative
import_level = RFC.feature_importances_
index = np.argsort(import_level)[::-1]
for rank, idx in enumerate(index):
    print('rank {}: feature {} importance {:.4f}'.format(rank, idx, import_level[idx]))
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Build the baseline random forest
rfc = RandomForestClassifier(n_estimators=5000, random_state=90)
# Compute the baseline score with 10-fold cross-validation
score_pre = cross_val_score(rfc, merge_all, label, cv=10).mean()
score_pre
# Tuning: plot a learning curve over n_estimators (the parameter with the largest impact on a random forest)
score_lt = []
# Build a random forest every 10 steps to score different n_estimators values
for i in range(0, 200, 10):
    rfc = RandomForestClassifier(n_estimators=i+1,
                                 random_state=90)
    score = cross_val_score(rfc, merge_all, label, cv=10).mean()
    score_lt.append(score)
score_max = max(score_lt)
print('Best score: {}'.format(score_max),
      'number of trees: {}'.format(score_lt.index(score_max)*10+1))
# Plot the learning curve
x = np.arange(1,201,10)
plt.subplot(111)
plt.plot(x, score_lt, 'r-')
plt.show()
# Narrow the n_estimators search around the best value to the range 30-49
score_lt = []
for i in range(30, 50):
    rfc = RandomForestClassifier(n_estimators=i,
                                 random_state=90)
    score = cross_val_score(rfc, merge_all, label, cv=10).mean()
    score_lt.append(score)
score_max = max(score_lt)
print('Best score: {}'.format(score_max),
      'number of trees: {}'.format(score_lt.index(score_max) + 30))
# Plot the learning curve
x = np.arange(30,50)
plt.subplot(111)
plt.plot(x, score_lt,'o-')
plt.show()
# Build a random forest with n_estimators=45
rfc = RandomForestClassifier(n_estimators=45, random_state=90)
# Tune max_depth with a grid search
param_grid = {'max_depth':np.arange(1,20)}
GS = GridSearchCV(rfc, param_grid, cv=10)
GS.fit(merge_all, label)
best_param = GS.best_params_
best_score = GS.best_score_
print(best_param, best_score)
###Output
_____no_output_____ |
docs/auto_examples/plot_accuracy.ipynb | ###Markdown
Plot free recall accuracy. This example plots free recall accuracy for a single subject.
###Code
# Code source: Andrew Heusser
# License: MIT
#import
import quail
#load data
egg = quail.load_example_data()
#analysis
analyzed_data = quail.analyze(egg, analysis='accuracy', listgroup=['condition1']*4+['condition2']*4)
#plot by list
quail.plot(analyzed_data, plot_style='violin', title='Average Recall Accuracy')
###Output
_____no_output_____
###Markdown
Plot free recall accuracy. This example plots free recall accuracy for a single subject.
###Code
# Code source: Andrew Heusser
# License: MIT
#import
import quail
#load data
egg = quail.load('example')
#analysis
fegg = egg.analyze('accuracy', listgroup=['condition1']*4+['condition2']*4)
#plot by list
fegg.plot(plot_style='violin', title='Average Recall Accuracy')
###Output
_____no_output_____ |
_notebooks/Market-News.ipynb | ###Markdown
Cryptocurrency news
###Code
#hide_input
import warnings
warnings.filterwarnings('ignore')
from scripts.read_data import read_api
from scripts.read_data import read_news
from scripts.read_data import read_covid
import matplotlib.pyplot as plt
from IPython.display import Markdown as md
#hide_input
blockchain = read_news('blockchain')
btc = read_news('btc')
xrp = read_news('xrp')
trx = read_news('trx')
eth = read_news('eth')
ada = read_news('ada')
exchange = read_news('exchange')
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(blockchain['title'],'. ', blockchain['body'], ' ...', blockchain['url']))
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(btc['title'],'. ', btc['body'], ' ...', btc['url']))
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(xrp['title'],'. ', xrp['body'], ' ...', xrp['url']))
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(trx['title'],'. ', trx['body'], ' ...', trx['url']))
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(eth['title'],'. ', eth['body'], ' ...', eth['url']))
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(ada['title'],'. ', ada['body'], ' ...', ada['url']))
#hide_input
#md('##### {}'.format(titleBTC))
md("- {}{} {}{} [| Skaityti daugiau]({}) ".format(exchange['title'],'. ', exchange['body'], ' ...', exchange['url']))
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/Liander_hourly_gas_PBv01-checkpoint.ipynb | ###Markdown
Calculate hourly gas demand Libraries
###Code
from pandas import DataFrame, read_csv
import datetime
import math
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import pylab
from scipy.optimize import curve_fit
import os
###Output
_____no_output_____
###Markdown
Working directory
###Code
working_directory ='/Users/peter/python/pyBuurtwarmte'
os.chdir(working_directory)
## Read data from Alliander
file = "dagprofielen_gas.csv"
Liander_gas = pd.read_csv(file,
skiprows=1,
skipinitialspace=True,
sep=';',
decimal = ",",
header=0,
names=['Date','Dag','G1a profiled demand [MJ]','G2a profiled demand [MJ]','Totaal', 'Uur fractie G'],
parse_dates=[0]
)
Liander_gas = Liander_gas.reset_index(drop=True)  # reset_index returns a copy, so assign it back
Liander_gas['New Date'] = Liander_gas['Date'].apply(lambda x: x - pd.DateOffset(minutes=1))
# NB: it's easier to work on integer index instead of datetime index
# Liander_gas.set_index('New Date', inplace=True)
Liander_gas.replace(["NaN", 'NaT'], np.nan, inplace = True)
Liander_gas = Liander_gas.dropna(how='any')
#Liander_gas
Liander_test = pd.DataFrame(Liander_gas.groupby("Dag")["Uur fractie G"].sum())
Liander_test=Liander_test.rename(columns = {'Uur fractie G':'Dag fractie G'})
Liander_test['Dag fractie G'].sum()
Liander_gas = Liander_gas.merge(Liander_test, on='Dag')
Liander_gas['Dag fractie G'].sum()
Liander_gas['Uur fractie G'].sum()
Liander_gas.plot(y='Uur fractie G')
Liander_test.plot(y='Dag fractie G')
file = "Liander_gas_day_spec.csv"
export_report = Liander_gas.to_csv(file, index=False)
###Output
_____no_output_____ |
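###Markdown
Example application (a sketch; the annual consumption figure below is an assumption, not from the data): scale the hourly fractions to an hourly gas demand profile.
###Code
annual_demand_MJ = 50000  # assumed annual gas demand of one dwelling, in MJ
Liander_gas['Hourly demand [MJ]'] = annual_demand_MJ * Liander_gas['Uur fractie G']
Liander_gas.plot(y='Hourly demand [MJ]')
###Output
_____no_output_____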
notebooks/plotting/Overview.ipynb | ###Markdown
Overview plot An overall summary plot of behavior, tracking, and spiking activity for a particular session
###Code
session_key = {'subject_id': 'vIRt49', 'session': 1}
experiment.Session & session_key
experiment.BehaviorTrial & session_key
###Output
_____no_output_____
###Markdown
Extract whisker tracking data
###Code
tracking.Tracking.WhiskerTracking & session_key
device_key = {'tracking_device': 'WT_Camera_Vincent 0'}
tracking_timestamps = (tracking.Tracking & session_key & device_key).fetch1('tracking_timestamps')
tracking_timestamps
whiskers, whisker_angles = (tracking.Tracking.WhiskerTracking & session_key & device_key).fetch(
'whisker_idx', 'angle', order_by='whisker_idx')
whiskers
whisker_angles
plt.plot(whisker_angles[1])
###Output
_____no_output_____
###Markdown
Extract units' spiketimes
###Code
ephys.Unit & session_key
insertion_keys = (ephys.ProbeInsertion & session_key).fetch('KEY')
insertion_keys
spike_rasters = {}
for insertion_key in insertion_keys:
units, spikes = (ephys.Unit & insertion_key).fetch('unit', 'spike_times')
# concatenating spiketimes from all units into a 1d-vector
spike_vec = np.concatenate(spikes)
# build a 1d-vector of equal length, representing the corresponding unit number for each spike in the "spike_vec"
unit_vec = np.concatenate([[t] * len(s) for s, t in zip(spikes, unit)])
# store in the "spike_rasters" with the insertion_number as key
spike_rasters[insertion_key['insertion_number']] = {'spike_vec': spike_vec, 'unit_vec': unit_vec}
spike_rasters
###Output
_____no_output_____
###Markdown
PLOT
###Code
fig, axes = plt.subplots(len(spike_rasters) + 1, 1, figsize=(16, 8))
# plot whisker angle in the first subplot
axes[0].plot(tracking_timestamps, whisker_angles[0])
# plot spike raster in the remaining subplots, one per probe
for insertion_num, ax in zip(spike_rasters, axes[1:]):
ax.plot(spike_rasters[insertion_num]['spike_vec'], spike_rasters[insertion_num]['unit_vec'], 'k|')
###Output
_____no_output_____
###Markdown
Example segmentation by trial
###Code
experiment.BehaviorTrial * experiment.SessionTrial & session_key
###Output
_____no_output_____
###Markdown
fetch back the start and stop times of all trials in this session
###Code
trial_starts, trial_stops = (experiment.BehaviorTrial * experiment.SessionTrial & session_key).fetch('start_time', 'stop_time')
trial_starts
trial_stops
###Output
_____no_output_____
###Markdown
extract the whisker angle data from the last trial only
###Code
in_trial = np.logical_and(tracking_timestamps >= trial_starts[-1], tracking_timestamps < trial_stops[-1])
trial_timestamps = tracking_timestamps[in_trial]
trial_whisker = whisker_angles[1][in_trial]
trial_timestamps
trial_whisker
plt.plot(trial_timestamps, trial_whisker)
###Output
_____no_output_____
###Markdown
extract the unit spiketimes from the last trial only
###Code
units, spikes, waveforms = (ephys.Unit * ephys.Unit.Waveform & insertion_key).fetch('unit', 'spike_times', 'waveform')
units
unit_spikes = spikes[5] # select spike times from unit #5
unit_spikes
in_trial = np.logical_and(unit_spikes >= trial_starts[-1], unit_spikes < trial_stops[-1])
trial_spikes = unit_spikes[in_trial]
trial_spikes
###Output
_____no_output_____
###Markdown
PLOT
###Code
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
ax.plot(trial_timestamps, trial_whisker)
ax.plot(trial_spikes, np.full_like(trial_spikes, np.nanmax(trial_whisker)), 'k*')
###Output
_____no_output_____
###Markdown
Overview plot An overall summary plot of behavior, tracking, and spiking activity for a particular session
###Code
session_key = {'subject_id': 'vIRt47', 'session': 20}
experiment.Session & session_key
experiment.BehaviorTrial & session_key
###Output
_____no_output_____
###Markdown
Extract whisker tracking data
###Code
tracking.Tracking.WhiskerTracking & session_key
device_key = {'tracking_device': 'WT_Camera_Vincent 0'}
tracking_timestamps = (tracking.Tracking & session_key & device_key).fetch1('tracking_timestamps')
tracking_timestamps
whiskers, whisker_angles = (tracking.Tracking.WhiskerTracking & session_key & device_key).fetch(
'whisker_idx', 'angle', order_by='whisker_idx')
whiskers
whisker_angles
plt.plot(whisker_angles[1])
whisker_phase = (tracking.ProcessedWhisker & session_key & device_key).fetch(
'phase', order_by='whisker_idx')
plt.plot(whisker_phase[1])
###Output
_____no_output_____
###Markdown
Extract units' spiketimes
###Code
ephys.Unit & session_key
insertion_keys = (ephys.ProbeInsertion & session_key).fetch('KEY')
insertion_keys
spike_rasters = {}
for insertion_key in insertion_keys:
units, spikes = (ephys.Unit & insertion_key).fetch('unit', 'spike_times')
# concatenating spiketimes from all units into a 1d-vector
spike_vec = np.concatenate(spikes)
# build a 1d-vector of equal length, representing the corresponding unit number for each spike in the "spike_vec"
unit_vec = np.concatenate([[t] * len(s) for s, t in zip(spikes, units)])
# store in the "spike_rasters" with the insertion_number as key
spike_rasters[insertion_key['insertion_number']] = {'spike_vec': spike_vec, 'unit_vec': unit_vec}
spike_rasters
###Output
_____no_output_____
###Markdown
PLOT
###Code
fig, axes = plt.subplots(len(spike_rasters) + 1, 1, figsize=(16, 8))
# plot whisker angle in the first subplot
axes[0].plot(tracking_timestamps, whisker_angles[0])
# plot spike raster in the remaining subplots, one per probe
for insertion_num, ax in zip(spike_rasters, axes[1:]):
ax.plot(spike_rasters[insertion_num]['spike_vec'], spike_rasters[insertion_num]['unit_vec'], 'k|')
###Output
_____no_output_____
###Markdown
Example segmentation by trial
###Code
experiment.BehaviorTrial * experiment.SessionTrial & session_key
###Output
_____no_output_____
###Markdown
fetch back the start and stop times of all trials in this session
###Code
trial_starts, trial_stops = (experiment.BehaviorTrial * experiment.SessionTrial & session_key).fetch('start_time', 'stop_time')
trial_starts
trial_stops
###Output
_____no_output_____
###Markdown
extract the whisker angle data from the last trial only
###Code
in_trial = np.logical_and(tracking_timestamps >= trial_starts[-1], tracking_timestamps < trial_stops[-1])
trial_timestamps = tracking_timestamps[in_trial]
trial_whisker = whisker_angles[1][in_trial]
trial_timestamps
trial_whisker
plt.plot(trial_timestamps, trial_whisker)
###Output
_____no_output_____
###Markdown
extract the unit spiketimes from the last trial only
###Code
units, spikes, waveforms = (ephys.Unit * ephys.Unit.Waveform & insertion_key).fetch('unit', 'spike_times', 'waveform')
units
unit_spikes = spikes[5] # select spike times from unit #5
unit_spikes
in_trial = np.logical_and(unit_spikes >= trial_starts[-1], unit_spikes < trial_stops[-1])
trial_spikes = unit_spikes[in_trial]
trial_spikes
###Output
_____no_output_____
###Markdown
PLOT
###Code
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
ax.plot(trial_timestamps, trial_whisker)
ax.plot(trial_spikes, np.full_like(trial_spikes, np.nanmax(trial_whisker)), 'k*')
###Output
_____no_output_____ |
notebooks/04-Deploying_Dask.ipynb | ###Markdown
04. Deploying Dask Overview In this notebook, you will learn how to: - Configure remote Dask.distributed deployment. - Deploy Dask.distributed scheduler and workers on compute nodes. - Access scheduler and worker dashboards. Import idact It's recommended that *idact* is installed with *pip*. Alternatively, make sure the dependencies are installed: `pip install -r requirements.txt`, and add *idact* to path, for example:
###Code
import sys
sys.path.append('../')
###Output
_____no_output_____
###Markdown
We will use a wildcard import for convenience:
###Code
from idact import *
import bitmath
###Output
_____no_output_____
###Markdown
Load the cluster Let's load the environment and the cluster. Make sure to use your cluster name.
###Code
load_environment()
cluster = show_cluster("hpc")
cluster
access_node = cluster.get_access_node()
access_node.connect()
###Output
_____no_output_____
###Markdown
Configure remote Dask deployment Install Dask.distributed on the cluster Make sure `dask`, `distributed` and `bokeh` packages are installed with the Python 3.5+ distribution you intend to use on the cluster, see [Dask documentation](http://distributed.dask.org/en/latest/install.html).If you encounter any problems with deployment, this may be due to some library versions being incompatible. You can try installing frozen versions included with the *idact* repo in `envs/dask_jupyter_tornado.txt`, same as described in the previous tutorial `03. Deploying Jupyter`. Specify setup actions Same as for the Jupyter deployment, in order for *idact* to find and execute the proper binaries, you'll need to specify setup steps as a list of Bash script lines.They may very well be the exact same instructions as for the Jupyter deployment. If they are not, replace `cluster.config.setup_actions.jupyter` with the correct instructions list.
###Code
cluster.config.setup_actions.dask = cluster.config.setup_actions.jupyter
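# If Dask needs different setup than Jupyter, assign an explicit list of Bash lines instead.
# The module/environment names below are illustrative assumptions; adjust to your cluster:
# cluster.config.setup_actions.dask = ['module load python/3.6',
#                                      'source activate dask-env']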
save_environment()
###Output
_____no_output_____
###Markdown
Choose the scratch directory Dask requires a directory for offloading data when the memory starts to fill up. Usually, the faster the storage, the better, see `--local-directory` in [dask-worker documentation](http://distributed.dask.org/en/latest/worker.htmlspill-data-to-disk).You can pass an absolute path, or a cluster environment variable.
###Code
cluster.config.scratch = '$SCRATCH'
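# An absolute path also works, e.g. (illustrative path, adjust to your cluster):
# cluster.config.scratch = '/scratch/my-user/dask'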
save_environment()
###Output
_____no_output_____
###Markdown
Allocate nodes for the Dask deployment We will deploy Dask on three nodes: a scheduler on the first node, and one worker on each node. Make sure to adjust the `--account` parameter, same as in previous notebooks.
###Code
nodes = cluster.allocate_nodes(nodes=3,
cores=2,
memory_per_node=bitmath.GiB(10),
walltime=Walltime(minutes=10),
native_args={
'--account': 'intdata'
})
nodes
nodes.wait()
nodes
###Output
_____no_output_____
###Markdown
Deploy Dask After the initial setup, Dask can be deployed with a single command:
###Code
dd = deploy_dask(nodes)
dd
###Output
2018-11-24 01:40:46 INFO: Deploying Dask on 3 nodes.
2018-11-24 01:40:46 INFO: Connecting to p0289:60904 (1/3).
2018-11-24 01:40:48 INFO: Connecting to p0290:43482 (2/3).
2018-11-24 01:40:50 INFO: Connecting to p0291:54153 (3/3).
2018-11-24 01:40:52 INFO: Deploying scheduler on the first node: p0289.
2018-11-24 01:41:20 INFO: Checking scheduler connectivity from p0289 (1/3).
2018-11-24 01:41:20 INFO: Checking scheduler connectivity from p0290 (2/3).
2018-11-24 01:41:20 INFO: Checking scheduler connectivity from p0291 (3/3).
2018-11-24 01:41:21 INFO: Deploying workers.
2018-11-24 01:41:21 INFO: Deploying worker 1/3.
2018-11-24 01:41:42 INFO: Deploying worker 2/3.
2018-11-24 01:42:05 INFO: Deploying worker 3/3.
2018-11-24 01:42:46 INFO: Retried and failed: config.retries[Retry.OPEN_TUNNEL].{count=3, seconds_between=5}
2018-11-24 01:42:46 ERROR: Failure: Adding last hop.
2018-11-24 01:43:08 INFO: Validating worker 1/3.
2018-11-24 01:43:08 INFO: Validating worker 2/3.
2018-11-24 01:43:08 INFO: Validating worker 3/3.
###Markdown
If the deployment fails, take a look at `idact.log` to find out why. You can get a Dask client for the deployment:
###Code
client = dd.get_client()
client
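# Optional sanity check (a sketch): compare Python/library versions between this client,
# the scheduler and the workers; with check=True a mismatch raises an error.
# client.get_versions(check=True)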
###Output
_____no_output_____
###Markdown
You shouldn't perform computations with Dask.distributed from your local computer, due to likely Python and library version mismatches.Even if your Python environment matched the cluster exactly, the amount of data that could be transferred to your local computer could prove overwhelming.We will address this issue in one of the next notebooks, by making the Dask deployment accessible from a notebook deployed on the cluster. Browse Dask dashboards Dask provides dashboards for the scheduler and each worker:
###Code
dd.diagnostics.addresses
###Output
_____no_output_____
###Markdown
To open all dashboards, execute the line below. You can also click the scheduler dashboard link under `get_client` above.
###Code
dd.diagnostics.open_all()
###Output
_____no_output_____
###Markdown
Cancel Dask deployment After you're done, you can cancel the deployment by calling `cancel`, though it will be killed anyway when the node allocation ends.
###Code
dd.cancel()
###Output
2018-11-24 01:43:48 INFO: Cancelling worker deployment on p0291.
2018-11-24 01:43:54 INFO: Cancelling worker deployment on p0290.
2018-11-24 01:44:02 INFO: Cancelling worker deployment on p0289.
2018-11-24 01:44:08 INFO: Cancelling scheduler deployment on p0289.
###Markdown
Alternatively, the following will just close the tunnels, without attempting to kill Dask scheduler and workers:
###Code
dd.cancel_local()
###Output
_____no_output_____
###Markdown
Dask client is multithreaded, so it needs to be closed.
###Code
client.close()
###Output
_____no_output_____
###Markdown
Cancel the allocation It's important to cancel an allocation if you're done with it early, in order to minimize the CPU time you are charged for.
###Code
nodes.running()
nodes.cancel()
nodes.running()
###Output
_____no_output_____
###Markdown
04. Deploying Dask Overview In this notebook, you will learn how to: - Configure remote Dask.distributed deployment. - Deploy Dask.distributed scheduler and workers on compute nodes. - Access scheduler and worker dashboards. Import idact It's recommended that *idact* is installed with *pip*. Alternatively, make sure the dependencies are installed: `pip install -r requirements.txt`, and add *idact* to path, for example: `import sys` `sys.path.append('')` We will use a wildcard import for convenience:
###Code
from idact import *
import bitmath
###Output
_____no_output_____
###Markdown
Load the cluster Let's load the environment and the cluster. Make sure to use your cluster name.
###Code
load_environment()
cluster = show_cluster("test")
cluster
access_node = cluster.get_access_node()
access_node.connect()
###Output
_____no_output_____
###Markdown
Configure remote Dask deployment Install Dask.distributed on the cluster Make sure `dask`, `distributed` and `bokeh` packages are installed with the Python 3.5+ distribution you intend to use on the cluster, see [Dask documentation](http://distributed.dask.org/en/latest/install.html).If you encounter any problems with deployment, this may be due to some library versions being incompatible. You can try installing frozen versions included with the *idact* repo in `envs/dask_jupyter_tornado.txt`, same as described in the previous tutorial `03. Deploying Jupyter`. Specify setup actions Same as for the Jupyter deployment, in order for *idact* to find and execute the proper binaries, you'll need to specify setup steps as a list of Bash script lines.They may very well be the exact same instructions as for the Jupyter deployment. If they are not, replace `cluster.config.setup_actions.jupyter` with the correct instructions list.
###Code
cluster.config.setup_actions.dask = cluster.config.setup_actions.jupyter
save_environment()
###Output
_____no_output_____
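###Markdown
If your cluster needs different commands than the Jupyter deployment, you can assign an explicit list of Bash lines instead. This is a minimal sketch; the module name and virtualenv path below are placeholders, not values taken from any real cluster:
###Code
# hypothetical setup actions -- substitute whatever your cluster actually requires
cluster.config.setup_actions.dask = [
    'module load python/3.6',
    'source $HOME/venvs/dask/bin/activate']
save_environment()
###Output
_____no_output_____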
###Markdown
Choose the scratch directory Dask requires a directory for offloading data when the memory starts to fill up. Usually, the faster the storage, the better, see `--local-directory` in [dask-worker documentation](http://distributed.dask.org/en/latest/worker.htmlspill-data-to-disk).You can pass an absolute path, or a cluster environment variable.
###Code
cluster.config.scratch = '$SCRATCH'
save_environment()
###Output
_____no_output_____
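###Markdown
Alternatively, an absolute path works as well. The directory below is made up; point it at fast storage that actually exists on your cluster:
###Code
# hypothetical absolute scratch path (commented out so $SCRATCH above stays in effect)
# cluster.config.scratch = '/mnt/scratch/my_user/dask'
# save_environment()
###Output
_____no_output_____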
###Markdown
Allocate nodes for the Dask deployment We will deploy Dask on three nodes: a scheduler on the first node, and one worker on each node. Make sure to adjust the `--account` parameter, same as in previous notebooks.
###Code
nodes = cluster.allocate_nodes(nodes=3,
cores=2,
memory_per_node=bitmath.GiB(10),
walltime=Walltime(minutes=10),
native_args={
'--account': 'intdata'
})
nodes
nodes.wait()
nodes
###Output
_____no_output_____
###Markdown
Deploy Dask After the initial setup, Dask can be deployed with a single command:
###Code
dd = deploy_dask(nodes)
dd
###Output
_____no_output_____
###Markdown
If the deployment fails, take a look at `idact.log` to find out why. You can get a Dask client for the deployment:
###Code
client = dd.get_client()
client
###Output
_____no_output_____
###Markdown
You shouldn't perform computations with Dask.distributed from your local computer, due to likely Python and library version mismatches.Even if your Python environment matched the cluster exactly, the amount of data that could be transferred to your local computer could prove overwhelming.We will address this issue in one of the next notebooks, by making the Dask deployment accessible from a notebook deployed on the cluster. Browse Dask dashboards Dask provides dashboards for the scheduler and each worker:
###Code
dd.diagnostics.addresses
###Output
_____no_output_____
###Markdown
To open all dashboards, execute the line below. You can also click the scheduler dashboard link under `get_client` above.
###Code
dd.diagnostics.open_all()
###Output
_____no_output_____
###Markdown
Cancel Dask deployment After you're done, you can cancel the deployment by calling `cancel`, though it will be killed anyway when the node allocation ends.
###Code
dd.cancel()
###Output
_____no_output_____
###Markdown
Alternatively, the following will just close the tunnels, without attempting to kill Dask scheduler and workers:
###Code
dd.cancel_local()
###Output
_____no_output_____
###Markdown
Dask client is multithreaded, so it needs to be closed.
###Code
client.close()
###Output
_____no_output_____
###Markdown
Cancel the allocation It's important to cancel an allocation if you're done with it early, in order to minimize the CPU time you are charged for.
###Code
nodes.running()
nodes.cancel()
nodes.running()
###Output
_____no_output_____ |
Analysis/Analysis.ipynb | ###Markdown
**INTRODUCTION** Natural Stat Trick's "Games.csv" is a collection of analytical and statistical measures taken from the 2020 NHL Stanley Cup Playoffs. Each row represents one of the teams involved in a game, with that team's value for each metric. The playoffs concluded with the Tampa Bay Lightning claiming the franchise's second Stanley Cup. In light of this, do the so-called "fancy stats" indicate that Tampa Bay was worthy of winning the cup? To investigate, consider various metrics that indicate how *lucky* or *unlucky* a team was, such as shooting plus save percentage (PDO), Corsi For (CF%), and Shots For percentage (SF%), among others. **RESEARCH QUESTION 1: Do the Advanced Stats Indicate that Tampa Bay was Worthy of Winning?** First, let us examine a plot that indicates how good each playoff team was at creating offensive opportunities.
###Code
Project_Functions.SFGFdata(df)
###Output
/Users/marcusdecloux/Documents/DATA301/course-project-solo_104/Analysis/Scripts/Project_Functions.py:141: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
df.groupby(
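###Markdown
The two percentages plotted above can also be recomputed directly from raw counts. A minimal sketch, assuming hypothetical column names ('GF', 'GA', 'SF', 'SA') for goals and shots for/against in the Games.csv dataframe:
###Code
# recompute the rate stats from raw counts (column names are assumed, not taken from the dataset)
df['GF%'] = df['GF'] / (df['GF'] + df['GA']) * 100
df['SF%'] = df['SF'] / (df['SF'] + df['SA']) * 100
df[['GF%', 'SF%']].describe()
###Output
_____no_output_____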
###Markdown
GF% (Goals For percent, Goals For over total goals: GF/G * 100%) and SF% (Shots For percent, Shots For over total shots: SF/S * 100%) are indicators of which team controls the offensive pace of the game. Teams that do both well maintain a fair goals-to-shots ratio, meaning they don't suffer from poor shooting. Tampa Bay is outranked in **both** measures only by the Vegas Golden Knights and the Colorado Avalanche, both Western Conference teams that Tampa Bay did not need to play to win the playoffs this year. This scatterplot indicates Tampa Bay played a solid offensive game in the 2020 playoffs. Next, consider a defensive metric.
###Code
Project_Functions.SVperdata(df)
###Output
_____no_output_____
###Markdown
SV% measures how good a goalie is, on average. It is computed by dividing the total number of saves by the total number of shots against (SV/SA * 100%). Per Hockey Reference, the league-average SV% in 2019-2020 was 91.0% [link](https://www.hockey-reference.com/leagues/stats.html). Teams that recorded a higher SV% thus had good goaltending, and by extension good defense. Tampa Bay ranks 8th among all playoff teams and was significantly above league average. It is not a stretch to say that Tampa Bay provided excellent goaltending and defense in the 2020 Playoffs. One of the most common metrics is CF%, the share of shot attempts (i.e. shots directed toward the goal, regardless of whether they are blocked, miss, are saved, or go in) taken by a team out of total shot attempts. CF% is used to indicate who controls the game, on the logic that the team controlling the game wins every time. In practice this is rarely true, mostly due to the parity in the league.
###Code
Project_Functions.CFdata(df)
###Output
_____no_output_____
###Markdown
Tampa Bay once again ranks fairly high on this list. As a basic rule of thumb, a CF% higher than 50 indicates that a team is the better controller of the game. Each of the teams that managed a better CF% - Vegas, Colorado, Nashville, and Pittsburgh - was eliminated before Tampa Bay had to play them. Thus the Tampa Bay Lightning were an excellent team at controlling the whole game. Lastly, consider how *lucky* each team was.
###Code
Project_Functions.PDOdata(df)
###Output
_____no_output_____
###Markdown
In general, PDO measures how lucky a team is by combining a team's shooting and save percentages. PDO values higher than 1.02 imply that the team survived via unnaturally high shooting or save percentages (or both). Conversely, a PDO below 0.98 is indicative of poor luck. The 2020 playoff PDO results show that Tampa Bay was good, but not lucky: their PDO of 1.014 is high, but not too high. **ANALYSIS** By each of the above metrics, Tampa Bay was certainly one of the best teams in the playoffs - arguably the best Eastern Conference team. By the numbers, Tampa Bay played a forceful offensive game and benefited from above-average goaltending and defensive efforts, which together produced an ability to control the opposition throughout the game. Analytically, Tampa Bay cannot be viewed as lucky. By these numbers, Tampa Bay was certainly worthy of winning the cup. The opposition to *fancy stats* is generally called the "Eye Test". It does not use numbers, but takes the general public's view of each team at face value. Tampa Bay was certainly a good team by the Eye Test as well, arguably employing the best goaltender, defenceman, and winger in the league. Tampa Bay is regarded as a phenomenal team under any method of assessing the value each team provides. In conclusion, analytically, the Tampa Bay Lightning were a worthy winner of the 2020 Stanley Cup Playoffs. **RESEARCH QUESTION 2: Do Fancy Stats Tell the Whole Story?** Due to the intense parity in the NHL, the worst team in the league is capable of beating the best on any given night. Sometimes the analytically worse team is the one that wins. This occurred in the 2020 Western Conference Playoffs, when the Dallas Stars advanced to the finals over the Vegas Golden Knights. What explains this phenomenon? First, consider the teams' PDO again.
###Code
Project_Functions.PDOdata(df)
###Output
_____no_output_____
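###Markdown
For reference, the PDO plotted above is simply on-ice shooting percentage plus save percentage. A minimal sketch, again under the assumption of hypothetical raw-count columns:
###Code
# PDO = shooting % + save %, on the ~1.0-centered scale used above (column names assumed)
df['SH%'] = df['GF'] / df['SF']        # goals per shot for
df['SV%'] = 1 - df['GA'] / df['SA']    # saves per shot against
df['PDO'] = df['SH%'] + df['SV%']
df['PDO'].sort_values(ascending=False).head()
###Output
_____no_output_____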
###Markdown
In a similar fashion to Tampa Bay, the Stars cannot be considered a lucky team over the 2020 Playoffs, given their middle-of-the-pack PDO of 1.007. The Stars were not necessarily boosted by above-average shooting or save percentages. If we consider CF%, however, a different trend begins to emerge.
###Code
Project_Functions.CFdata(df)
###Output
_____no_output_____
###Markdown
By all accounts, Vegas was more deserving of a trip to the finals than Dallas. Dallas was a poor possession team during the playoffs, with a CF% well below 50%, meaning they were, on average, dominated throughout the playoffs. And yet Dallas bucked the trend, defeating the top two possession teams, Vegas and Colorado, in subsequent rounds.
###Code
Project_Functions.SVperdata(df)
###Output
_____no_output_____
###Markdown
OpenSMILE command line: `-C` selects the configuration file, `-I` the input audio, and `-O` the output file. Example: `SMILExtract -C config/demo/demo1_energy.conf -I wav_samples/speech01.wav -O speech01.energy.csv`
###Code
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
# # extract features from a single openSMILE output file
# f=open('/Users/shane/Downloads/opensmile-2.3.0/dutch_vowels.txt','r')
# # take the last line
# last_line=f.readlines()[-1]
# f.close()
# features=last_line.split(',')
# # drop the first and the last element
# # to obtain the feature values
# features=features[1:-1]
# batch extraction over all audio files
patients_audio_path='/Users/shane/Downloads/Hyperfun_dys/Patients'
healthy_audio_path='/Users/shane/Downloads/Hyperfun_dys/Healthy'
patients_output_path='/Users/shane/Downloads/Hyperfun_dys/out/Patients'
healthy_output_path='/Users/shane/Downloads/Hyperfun_dys/out/Healthy'
patients_audio_list=os.listdir(patients_audio_path)
healthy_audio_list=os.listdir(healthy_audio_path)
for audio in patients_audio_list:
if audio[-4:]=='.wav':
this_path_input=os.path.join(patients_audio_path,audio)
this_path_output=os.path.join(patients_output_path,audio[:-4]+'.txt')
cmd='cd /Users/shane/Downloads/opensmile-2.3.0 && SMILExtract -C config/IS09_emotion.conf -I '+this_path_input+' -O '+this_path_output
os.system(cmd)
for audio in healthy_audio_list:
if audio[-4:]=='.wav':
this_path_input=os.path.join(healthy_audio_path,audio)
this_path_output=os.path.join(healthy_output_path,audio[:-4]+'.txt')
cmd='cd /Users/shane/Downloads/opensmile-2.3.0 && SMILExtract -C config/IS09_emotion.conf -I '+this_path_input+' -O '+this_path_output
os.system(cmd)
patients_feature_list=os.listdir(patients_output_path)
healthy_feature_list=os.listdir(healthy_output_path)
patients_feature_list
patient_features=[]
i=0
for path in patients_feature_list:
if path[-4:]=='.txt':
f=open(patients_output_path+'/'+path,'r')
last_line=f.readlines()[-1]
f.close()
features=last_line.split(',')
features=features[1:-1]
patient_features.append(features)
i+=1
patient_features=np.reshape(patient_features, [i, len(features)])
patient_labels=np.zeros([i, 1])
patient=np.column_stack([patient_features, patient_labels])
healthy_features=[]
i=0
for path in healthy_feature_list:
if path[-4:]=='.txt':
f=open(healthy_output_path+'/'+path,'r')
last_line=f.readlines()[-1]
f.close()
features=last_line.split(',')
features=features[1:-1]
healthy_features.append(features)
i+=1
healthy_features=np.reshape(healthy_features, [i, len(features)])
healthy_labels=np.ones([i, 1])
healthy=np.column_stack([healthy_features, healthy_labels])
all_data=np.row_stack([patient, healthy])
print(patient_features.shape, healthy_features.shape, patient.shape, healthy.shape, all_data.shape)
X_train, X_test, y_train, y_test=train_test_split(all_data[:,:-1], all_data[:,-1],
test_size=0.2, random_state=2019, shuffle=True)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
clf = SVC(C=1,gamma='scale', kernel='poly', degree=5)
clf.fit(X_train, y_train)
pred=clf.predict(X_test)
# row: true; col: pred (true, pred)
confusion_matrix(y_test, pred)
clf.score(X_test, y_test)
###Output
_____no_output_____
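###Markdown
A single train/test split on a dataset this small is noisy; cross-validation gives a steadier accuracy estimate. A minimal sketch reusing the same classifier settings:
###Code
from sklearn.model_selection import cross_val_score
# 5-fold cross-validated accuracy; the features were parsed from text, so cast to float first
scores = cross_val_score(SVC(C=1, gamma='scale', kernel='poly', degree=5),
                         all_data[:, :-1].astype(float),
                         all_data[:, -1].astype(float), cv=5)
print(scores.mean(), scores.std())
###Output
_____no_output_____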
###Markdown
Autism
###Code
import speech_recognition as sr
r = sr.Recognizer()
sss=sr.AudioFile('../Audio_recog/test.wav')
with sss as source:
sss_duration=sss.DURATION # get the time length of the audios
audio_host_1=r.record(source, duration=6)
audio_guest_1=r.record(source, duration=14)
audio_host_2=r.record(source, duration=19)
audio_guest_2=r.record(source, duration=23)
audio_host_3=r.record(source, duration=17)
test_host_1=r.recognize_google(audio_host_1)
test_host_2=r.recognize_google(audio_host_2)
test_host_3=r.recognize_google(audio_host_3)
test_guest_1=r.recognize_google(audio_guest_1)
test_guest_2=r.recognize_google(audio_guest_2)
# r.recognize_google(audio, language='fr-FR')
print(sss_duration)
print(test_host_1,test_guest_1, test_host_2,test_guest_2, test_host_3)
print('\n',test_guest_1)
print('\n',test_guest_2)
###Output
79.50512471655328
this is scientific American's sixty-second science Einstein celebration songs across the world is the key to a better world in the future climate change social inequality Jeremy Farrar director of the wellcome trust one of the world's biggest non-governmental funders of scientific research earlier today January 23rd he spoke to Scientific American editor and chief Maria dichristina at the world economic Forum in Davos after they both took part in the Global Science Outlook discussion at the four and nobody know myself no air organization welcome no organization is going to solve this on the road and so we've launched a campaign cool together signs can bring together people from around the world to say to stand up for those things you can see video of the entire discussion that took place at Davos earlier today just Google World economic Forum in Davos Global Science outlook for scientific American's sixty-second science I'm Steve mirsky
celebration songs across the world is the key to a better world in the future climate change social inequality
and nobody know myself no air organization welcome no organization is going to solve this on the road and so we've launched a campaign cool together signs can bring together people from around the world to say to stand up for those things
###Markdown
Use microphone input
###Code
# !pip install pyaudio
# !brew install portaudio
# !python -m speech_recognition
r = sr.Recognizer()
mic = sr.Microphone()
with mic as source:
r.adjust_for_ambient_noise(source)
audio = r.listen(source)
try:
print("Google Speech Recognition thinks you said " + r.recognize_google(audio))
except sr.UnknownValueError:
print("Google Speech Recognition could not understand audio")
except sr.RequestError as e:
print("Could not request results from Google Speech Recognition service; {0}".format(e))
###Output
Google Speech Recognition thinks you said hello
###Markdown
Adult word count (AWC)
###Code
# import nltk
# nltk.download()
# Normal people say 1000 words more than ASD patients
from nltk.corpus import words
from nltk.tokenize import word_tokenize
word_count=[]
word_list = words.words()
for w in word_tokenize(test_guest_2):
word_count.append(w in word_list)
print(sum(word_count))
###Output
40
###Markdown
Frequency of vocalizations
###Code
# total_words=len(test_host_1.split())+len(test_guest_1.split())+len(test_host_2.split())+len(test_guest_2.split())+len(test_host_3.split())
# print('total words: ', total_words)
# frequency_vocal=total_words/sss_duration
# frequency_vocal # words/s
from collections import Counter
word_counter = Counter()
text_test_guest_2=word_tokenize(test_guest_2)
for w in text_test_guest_2:
word_counter.update(w.split())
# print(word_counter)
ori_tagged=nltk.pos_tag(text_test_guest_2)
tag_fd = nltk.FreqDist(tag for (word, tag) in ori_tagged)
tag_fd.plot(cumulative=True)
###Output
_____no_output_____
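###Markdown
The commented-out cell above sketches a vocalization rate; here is a working version that divides the total number of recognized words by the recording duration (words per second):
###Code
# overall speaking rate across all recognized segments
transcripts = [test_host_1, test_guest_1, test_host_2, test_guest_2, test_host_3]
total_words = sum(len(t.split()) for t in transcripts)
print(total_words / sss_duration)  # words per second
###Output
_____no_output_____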
###Markdown
Conversational turns
###Code
import tarfile
tar = tarfile.open('/Users/shane/Downloads/VBDiarization/models/x-vectors/0003_sre16_v2_1a.tar.gz', 'r:gz')
tar.extractall('/Users/shane/Downloads/VBDiarization/models/x-vectors')
tar.close()
import sidekit
idmap = sidekit.IdMap()
idmap.leftids = np.array(["host", "guest", "host", "guest", "host"])
idmap.rightids = np.array(["segment_1", "segment_2", "segment_3", "segment_4", "segment_5"])
idmap.start = np.empty((5), dtype="|O")
idmap.stop = np.empty((5), dtype="|O")
idmap.validate()
ndx = sidekit.Ndx()
ndx.modelset = np.array(["host", "guest"])
ndx.segset = np.array(["segment_1", "segment_2", "segment_3", "segment_4", "segment_5"])
ndx.trialmask = np.ones((2,5), dtype='bool')
ndx.validate()
key = sidekit.Key()
key.modelset = ndx.modelset
key.segset = ndx.segset
# tar marks target trials (model and segment belong to the same speaker):
# the host spoke segments 1, 3, 5 and the guest spoke segments 2, 4
key.tar = np.zeros((2,5), dtype='bool')
key.tar[0, 0] = True
key.tar[0, 2] = True
key.tar[0, 4] = True
key.tar[1, 1] = True
key.tar[1, 3] = True
# non marks non-target (impostor) trials: the complementary cells
key.non = np.zeros((2,5), dtype='bool')
key.non[0, 1] = True
key.non[0, 3] = True
key.non[1, 0] = True
key.non[1, 2] = True
key.non[1, 4] = True
key.validate()
extractor = sidekit.FeaturesExtractor(audio_filename_structure=None,
feature_filename_structure=None,
sampling_frequency=None,
lower_frequency=200,
higher_frequency=3800,
filter_bank="log",
filter_bank_size=24,
window_size=0.025,
shift=0.01,
ceps_number=20,
vad="snr",
snr=40,
pre_emphasis=0.97,
save_param=["vad", "energy", "cep", "fb"],
keep_all_features=True)
extractor.save(show="test",
channel=0,
input_audio_filename='/Users/shane/Downloads/Audio_recog/test.wav',
output_feature_filename='/Users/shane/Downloads/Audio_recog/test.h5')
server = sidekit.FeaturesServer(features_extractor=None,
feature_filename_structure='/Users/shane/Downloads/Audio_recog/test.h5',
sources=None,
dataset_list=["energy", "cep", "vad"],
mask="[0-12]",
feat_norm="cmvn",
global_cmvn=None,
dct_pca=False,
dct_pca_config=None,
sdc=False,
sdc_config=None,
delta=True,
double_delta=True,
delta_filter=None,
context=None,
traps_dct_nb=None,
rasta=True,
keep_all_features=True)
server
distrib_nb = 2048 # number of Gaussian distributions for each GMM
rank_TV = 400 # Rank of the total variability matrix
tv_iteration = 10 # number of iterations to run
plda_rk = 400 # rank of the PLDA eigenvoice matrix
feature_dir = '/lium/spk1/larcher/mfcc_24/' # directory where to find the features
feature_extension = 'h5' # Extension of the feature files
nbThread = 10 # Number of parallel process to run
###Output
_____no_output_____
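###Markdown
Before full speaker diarization, a crude conversational-turn count can be read straight off the manually segmented transcript: every change of speaker is one turn. A minimal sketch using the segment labels defined earlier:
###Code
# count speaker changes in the hand-labelled host/guest sequence
speakers = ['host', 'guest', 'host', 'guest', 'host']
turns = sum(1 for a, b in zip(speakers, speakers[1:]) if a != b)
print(turns)  # expected: 4 turns
###Output
_____no_output_____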
###Markdown
Analysis of Song Lyrics:
###Code
import pandas as pd
import lyricsgenius as genius
import string
# pip install import_ipynb
import import_ipynb
from fetch import *
import nltk
###Output
importing Jupyter notebook from fetch.ipynb
###Markdown
Search for Artist:
###Code
%%time
token = "oWh91MguQJvO6mD9otmosJnKFgpyFAx8Vqzd-enE3BYFdslxrhffaba36n0yZ2iC"
LH = findMusic("Lee Hi", token)
LH.head()
###Output
_____no_output_____
###Markdown
Clean Artist's Song Lyrics:
###Code
LHc = fixLyrics(LH, "Lyrics")
pd.set_option('max_colwidth', 1000)
LHc.head()
###Output
_____no_output_____
###Markdown
Drop Repeating (e.g. Same Song, Different Language) Songs:
###Code
LHc["Title"]
###Output
_____no_output_____
###Markdown
Notes:
- 한숨: 0 (KR), 43 (JP)
- Rose: 11 (Official), 44 (Live)
###Code
LHc = LHc.drop([LHc.index[43], LHc.index[44]])
LHc.head()
###Output
_____no_output_____
###Markdown
Convert Songs to CSV:
###Code
LHc.to_csv("LH.csv", index = False)
###Output
_____no_output_____
###Markdown
Strip Lyrics:
###Code
def words(lc):
# Unique words within lyrics:
uWord = []
for i in lc:
if i not in uWord:
uWord.append(i)
return uWord
unique = []
for i in LHc["Lyrics"].tolist():
unique.append(words(stopwordLyrics(i).split()))
LHc["Unique"] = unique
LHc
###Output
_____no_output_____ |
workflow/notebooks/do_things/examine_yiqtol.ipynb | ###Markdown
Examining the Yiqtol Verb Dataset
###Code
import sys
import pandas as pd
sys.path.append('../scripts/')
from analysis.load_dfs import DfLoader
df_load = DfLoader('../../results/datasets/yqtl/')
yqtl_df = df_load.df
agree_df = df_load.eng_agree()
agree_df.shape
[1, 2, 3][-1:]
pt = pd.pivot_table(
agree_df,
index=['eng_TAMsimp', 'person'],
columns='has_objc',
aggfunc='size',
fill_value=0,
)
pt.head()
pt = pt[[1, 0]]
pt
total = pt.sum(1)
total.name = 'TOTAL'
pd.concat([total, pt], axis=1)  # positional axis argument is deprecated in pandas
yqtl_df.loc[35]['niv_sent']
from analysis.df_styles import TextShower
ts = TextShower(default=['ref_sbl', 'sentence', 'text_full', 'esv', 'esv_TAM', 'esv_verse'])
yqtl_df.eng_agree.value_counts()
yqtl_df.shape
agree_df.shape
yqtl_df.eng_TAM.value_counts().head(50)
yqtl_df.eng_TAM.value_counts().shape
agree_df.eng_TAM.value_counts().head(50)
ts.show(yqtl_df.query('esv_TAM == "PRES"'), spread=100)
ts.show(yqtl_df.query('esv_TAM == "IMPV"'), spread=100)
ts.show(yqtl_df.query('esv_TAM == "MOD is to"'), spread=100)
ts.show(yqtl_df.query('esv_TAM == "PAST"'), spread=100)
###Output
showing 100 of 598
|
docs/practices/high_level_api/high_level_api.ipynb | ###Markdown
PaddlePaddle High-Level API Guide **Author:** [PaddlePaddle](https://github.com/PaddlePaddle) **Date:** 2021.10 **Abstract:** This tutorial explains the PaddlePaddle (飞桨) high-level API in detail and shows how to use it to complete deep learning tasks quickly.

I. Introduction

PaddlePaddle 2.0 introduces a brand-new high-level API: a further encapsulation and upgrade of the existing PaddlePaddle APIs that provides a more concise interface, makes the framework easier to learn and use, and extends its functionality. The high-level API targets everyone from deep learning beginners to experienced developers: beginners can build deep learning projects quickly and simply, while experienced developers can iterate on algorithms rapidly. The high-level API has the following characteristics:
* Easy to learn and use: it further wraps and optimizes the regular dynamic-graph API while remaining compatible with it; the same model takes far less code to implement.
* Low-code development: programs written with the high-level API are noticeably shorter.
* Dynamic-to-static conversion: changing a single line of code trains dynamic-graph code in static-graph mode, combining the debuggability of dynamic graphs with the training efficiency of static graphs.

In terms of functionality and usage, the high-level API brings these upgrades:
* An upgraded training workflow: the high-level API wraps a Model class, so a network wrapped in Model can be trained in just a few lines of code.
* A new image-processing module, transform: dozens of data-processing functions covering the common preprocessing and augmentation methods.
* A ready-to-use model zoo: common computer-vision and NLP models, including but not limited to mobilenet, resnet, yolov3, cyclegan, bert, transformer, and seq2seq, together with pretrained weights that can be used directly or fine-tuned.

II. Installing and using the high-level API

The high-level API needs no separate installation; installing paddlepaddle is enough. If your environment is not on this version, first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) for Paddle 2.2.0-rc0. After installation, `import paddle` gives access to the high-level APIs, such as paddle.Model, the vision domain paddle.vision, and the NLP domain paddle.text.
###Code
import paddle
import paddle.vision as vision
import paddle.text as text
paddle.__version__
###Output
_____no_output_____
###Markdown
III. Contents

This guide covers:
* training deep learning tasks on the datasets that ship with the high-level API;
* defining datasets from your own data, preprocessing them, and training on them;
* applying the data-augmentation interfaces during dataset definition and loading;
* building the model network;
* the high-level APIs used for model training;
* customizing with the basic API when the fit interface does not meet your needs;
* accelerating training with multiple GPUs.

IV. Dataset definition, loading, and preprocessing

Deep learning tasks always come down to the framework computing over various kinds of numbers; raw files such as images and text cannot be used directly. So one step is always involved: processing the raw data files into data a deep learning task can consume.

4.1 Built-in datasets

The high-level API exposes several commonly used datasets as domain APIs under `paddle.vision.datasets`. Let's first see which datasets are provided.
###Code
print('Vision datasets:', paddle.vision.datasets.__all__)
print('NLP datasets:', paddle.text.__all__)
###Output
Vision datasets: ['DatasetFolder', 'ImageFolder', 'MNIST', 'FashionMNIST', 'Flowers', 'Cifar10', 'Cifar100', 'VOC2012']
NLP datasets: ['Conll05st', 'Imdb', 'Imikolov', 'Movielens', 'UCIHousing', 'WMT14', 'WMT16']
###Markdown
Here we load the handwritten-digit dataset, using `mode` to select the training or the test split. The dataset interface downloads the data automatically from a remote server into the local cache directory `~/.cache/paddle/dataset`.
###Code
from paddle.vision.transforms import ToTensor
# training dataset
train_dataset = vision.datasets.MNIST(mode='train', transform=ToTensor())
# validation dataset
val_dataset = vision.datasets.MNIST(mode='test', transform=ToTensor())
###Output
_____no_output_____
###Markdown
4.2 Custom datasets

More often you will define a dataset from data you already have. This example shows how: PaddlePaddle provides the `paddle.io.Dataset` base class, and subclassing it is a quick way to define a dataset.
###Code
from paddle.io import Dataset
class MyDataset(Dataset):
"""
    Step 1: inherit from the paddle.io.Dataset class
"""
def __init__(self, mode='train'):
"""
        Step 2: implement the constructor; define how the data is read and split it into training and test sets
"""
super(MyDataset, self).__init__()
if mode == 'train':
self.data = [
['traindata1', 'label1'],
['traindata2', 'label2'],
['traindata3', 'label3'],
['traindata4', 'label4'],
]
else:
self.data = [
['testdata1', 'label1'],
['testdata2', 'label2'],
['testdata3', 'label3'],
['testdata4', 'label4'],
]
def __getitem__(self, index):
"""
        Step 3: implement __getitem__, defining how to fetch the sample at a given index and returning a single item (data, label)
"""
data = self.data[index][0]
label = self.data[index][1]
return data, label
def __len__(self):
"""
        Step 4: implement __len__, returning the total number of samples
"""
return len(self.data)
# try out the dataset we defined
train_dataset_2 = MyDataset(mode='train')
val_dataset_2 = MyDataset(mode='test')
print('=============train dataset=============')
for data, label in train_dataset_2:
print(data, label)
print('=============evaluation dataset=============')
for data, label in val_dataset_2:
print(data, label)
###Output
=============train dataset=============
traindata1 label1
traindata2 label2
traindata3 label3
traindata4 label4
=============evaluation dataset=============
testdata1 label1
testdata2 label2
testdata3 label3
testdata4 label4
###Markdown
4.3 Data augmentation

Training sometimes runs into overfitting; one remedy is to augment the training data, transforming it into varied images and thereby generalizing the dataset. The augmentation APIs are defined under the domain directory transforms. Two usage patterns are shown here: one based on the built-in datasets and one based on a custom dataset.

4.3.1 Built-in datasets
###Code
from paddle.vision.transforms import Compose, Resize, ColorJitter
# choose the augmentations to use: here, random brightness/contrast/saturation jitter plus a resize
transform = Compose([ColorJitter(), Resize(size=100)])
# pass the composed augmentations via the transform parameter to apply them to a built-in dataset
train_dataset_3 = vision.datasets.MNIST(mode='train', transform=transform)
###Output
_____no_output_____
###Markdown
4.3.2 Custom datasets

With a custom dataset there are two ways to apply augmentation: define the augmentation methods in the dataset's constructor and apply them to the data returned from __getitem__, or expose a constructor argument on the dataset class so the augmentation methods can be passed in at instantiation time.
###Code
from paddle.io import Dataset
class MyDataset(Dataset):
def __init__(self, mode='train'):
super(MyDataset, self).__init__()
if mode == 'train':
self.data = [
['traindata1', 'label1'],
['traindata2', 'label2'],
['traindata3', 'label3'],
['traindata4', 'label4'],
]
else:
self.data = [
['testdata1', 'label1'],
['testdata2', 'label2'],
['testdata3', 'label3'],
['testdata4', 'label4'],
]
        # define the preprocessing to use; these operate on images
self.transform = Compose([ColorJitter(), Resize(size=100)])
def __getitem__(self, index):
data = self.data[index][0]
        # apply the transforms to the training sample here
        # this is only an illustration; replace the data with real images when testing
data = self.transform(data)
label = self.data[index][1]
return data, label
def __len__(self):
return len(self.data)
###Output
_____no_output_____
###Markdown
V. Building the network

The high-level API shares a single unified approach to network building with the basic API, so there is no extra learning cost. A few simple examples follow.

5.1 Building with Sequential

For a sequential, linear network structure you can use Sequential directly, which avoids writing a class definition and saves code.
###Code
# build the network in Sequential form
mnist = paddle.nn.Sequential(
paddle.nn.Flatten(),
paddle.nn.Linear(784, 512),
paddle.nn.ReLU(),
paddle.nn.Dropout(0.2),
paddle.nn.Linear(512, 10)
)
###Output
_____no_output_____
###Markdown
5.2 Building by subclassing

For more complex network structures, write the model as a Layer subclass: declare the layers in the `__init__` constructor and use the declared layer variables for the forward computation in `forward`. Subclassing also enables sublayer reuse: a layer defined once in the constructor can be called several times in forward.
###Code
# build the network by subclassing the Layer class
class Mnist(paddle.nn.Layer):
def __init__(self):
super(Mnist, self).__init__()
self.flatten = paddle.nn.Flatten()
self.linear_1 = paddle.nn.Linear(784, 512)
self.linear_2 = paddle.nn.Linear(512, 10)
self.relu = paddle.nn.ReLU()
self.dropout = paddle.nn.Dropout(0.2)
def forward(self, inputs):
y = self.flatten(inputs)
y = self.linear_1(y)
y = self.relu(y)
y = self.dropout(y)
y = self.linear_2(y)
return y
mnist_2 = Mnist()
###Output
_____no_output_____
###Markdown
5.3 Wrapping the model

With the network structure defined, use `paddle.Model` to wrap it into a class that the high-level API can quickly train, evaluate, and predict with. There are two wrapping scenarios: dynamic-graph training mode and static-graph training mode.
###Code
# train on GPU
# paddle.set_device('gpu')
# wrap the model
## Scenario 1: dynamic-graph mode
## 1.1 training a model destined for inference deployment
## input and label specs must be supplied; otherwise the inference model saved with model.save(training=False) will fail when used
inputs = paddle.static.InputSpec([-1, 1, 28, 28], dtype='float32', name='input')
label = paddle.static.InputSpec([-1, 1], dtype='int8', name='label')
model = paddle.Model(mnist, inputs, label)
## 1.2 training for experimentation only
## input and label specs may be omitted
# model = paddle.Model(mnist)
## Scenario 2: static-graph mode
# paddle.enable_static()
# paddle.set_device('gpu')
# input = paddle.static.InputSpec([None, 1, 28, 28], dtype='float32')
# label = paddle.static.InputSpec([None, 1], dtype='int8')
# model = paddle.Model(mnist, input, label)
###Output
_____no_output_____
###Markdown
5.4 Visualizing the model

After assembling the network you will usually want to visualize it, checking the structure and parameters layer by layer against expectations. The `Model.summary` interface provides this view.
###Code
model.summary((1, 28, 28))
###Output
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Flatten-1 [[1, 28, 28]] [1, 784] 0
Linear-1 [[1, 784]] [1, 512] 401,920
ReLU-1 [[1, 512]] [1, 512] 0
Dropout-1 [[1, 512]] [1, 512] 0
Linear-2 [[1, 512]] [1, 10] 5,130
===========================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.02
Params size (MB): 1.55
Estimated Total Size (MB): 1.57
---------------------------------------------------------------------------
###Markdown
The summary interface can also be used in a second way. Besides `Model.summary`, which goes with the `paddle.Model` wrapper, there is a variant for networks that have not been wrapped: an instantiated Layer subclass can be passed directly to the `paddle.summary` interface for the same visualization.
###Code
paddle.summary(mnist, (1, 28, 28))
###Output
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Flatten-1 [[1, 28, 28]] [1, 784] 0
Linear-1 [[1, 784]] [1, 512] 401,920
ReLU-1 [[1, 512]] [1, 512] 0
Dropout-1 [[1, 512]] [1, 512] 0
Linear-2 [[1, 512]] [1, 10] 5,130
===========================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.02
Params size (MB): 1.55
Estimated Total Size (MB): 1.57
---------------------------------------------------------------------------
###Markdown
One point deserves attention: you may wonder why the input_size argument `(1, 28, 28)` has to be passed. In a dynamic graph, the input data's shape is not yet known at network-definition time, so there is nothing to render; by telling the interface the network's input shape, the network can infer the complete structure through layer-by-layer computation and display it. If InputSpec was already defined when wrapping with Model, it carries the input shape, and the shape argument to summary becomes unnecessary.

VI. Training the model

Once the network structure is wrapped into a model class with `paddle.Model`, running it is very concise: a single call to `Model.fit` completes the training loop. Before launching training with `Model.fit`, configure it ahead of time with `Model.prepare`, which sets the model's optimizer, the loss computation, the metric computation, and so on.
###Code
# prepare for training: set the optimizer, the loss function, and the accuracy metric
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
###Output
_____no_output_____
###Markdown
With the preparation done, call the `fit()` interface to launch training. At least three key arguments must be specified: the training dataset, the number of epochs, and the batch size.
###Code
# launch training: specify the training dataset, the number of epochs, the batch size, and the log format
model.fit(train_dataset,
epochs=5,
batch_size=64,
verbose=1)
###Output
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/5
step 938/938 [==============================] - loss: 0.0356 - acc: 0.9887 - 22ms/step
Epoch 2/5
step 938/938 [==============================] - loss: 0.0056 - acc: 0.9905 - 22ms/step
Epoch 3/5
step 938/938 [==============================] - loss: 0.0204 - acc: 0.9908 - 22ms/step
Epoch 4/5
step 938/938 [==============================] - loss: 2.0174e-04 - acc: 0.9929 - 22ms/step
Epoch 5/5
step 938/938 [==============================] - loss: 0.0031 - acc: 0.9936 - 22ms/step
###Markdown
**Note:** the first argument of `fit()` accepts not only a `paddle.io.Dataset` but also a DataLoader. To implement custom dataset sampling or similar logic, build the DataLoader outside fit and pass it in:

```python
train_dataloader = paddle.io.DataLoader(train_dataset)
...
model.fit(train_dataloader, ...)
```

6.1 Single machine, single GPU

Integrating the step-by-step training code above gives the following complete single-machine, single-GPU training program.
###Code
# train on GPU
# paddle.set_device('gpu')
# build the Model used for training, telling it which network to train
model = paddle.Model(mnist)
# prepare for training: set the optimizer, the loss function, and the accuracy metric
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
              paddle.nn.CrossEntropyLoss(),
              paddle.metric.Accuracy())
# launch training: specify the training dataset, the number of epochs, the batch size, and the log format
model.fit(train_dataset,
epochs=5,
batch_size=64,
verbose=1)
###Output
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/5
step 938/938 [==============================] - loss: 2.9138e-04 - acc: 0.9930 - 22ms/step
Epoch 2/5
step 938/938 [==============================] - loss: 0.0039 - acc: 0.9949 - 22ms/step
Epoch 3/5
step 938/938 [==============================] - loss: 0.0037 - acc: 0.9948 - 22ms/step
Epoch 4/5
step 938/938 [==============================] - loss: 9.9295e-04 - acc: 0.9957 - 22ms/step
Epoch 5/5
step 938/938 [==============================] - loss: 0.0106 - acc: 0.9958 - 22ms/step
###Markdown
6.2 Single machine, multiple GPUs

With the high-level API, single-machine multi-GPU training is very simple: the training code is exactly the same as the single-GPU version. Just start the single-GPU program with `paddle.distributed.launch`:

```bash
$ python -m paddle.distributed.launch train.py
```

where train.py contains the single-GPU code above.

6.3 Custom Loss

Sometimes the loss computation a task needs does not exist among the framework's built-in Loss interfaces, or the built-in algorithm does not match your requirements, and you want to define the loss yourself. Here is how; first look at the following code:

```python
class SelfDefineLoss(paddle.nn.Layer):
    """
    1. Inherit from paddle.nn.Layer
    """
    def __init__(self):
        """
        2. Define the constructor's parameters according to the needs of your algorithm and usage
        """
        super(SelfDefineLoss, self).__init__()

    def forward(self, input, label):
        """
        3. Implement forward, which is called with two arguments:
           - input: the model's forward output for a single sample or a batch
           - label: the label(s) for that sample or batch
           Return a Tensor: the loss, summed or averaged according to your custom logic
        """
        # custom computation logic built from the relevant Paddle APIs
        output = xxxxx
        return output
```

With the code-level skeleton covered, here is a real example: a custom loss written for the image-segmentation sample code, where the goal was to use a custom softmax axis.

```python
class SoftmaxWithCrossEntropy(paddle.nn.Layer):
    def __init__(self):
        super(SoftmaxWithCrossEntropy, self).__init__()

    def forward(self, input, label):
        loss = F.softmax_with_cross_entropy(input,
                                            label,
                                            return_softmax=False,
                                            axis=1)
        return paddle.mean(loss)
```

6.4 Custom Metric

Just like the loss, when you want a personalized evaluation you can implement your own metric computation through the framework. The concrete pattern is:

```python
class SelfDefineMetric(paddle.metric.Metric):
    """
    1. Inherit from paddle.metric.Metric
    """
    def __init__(self):
        """
        2. Implement the constructor with whatever parameters you need
        """
        super(SelfDefineMetric, self).__init__()

    def name(self):
        """
        3. Implement name, returning the name of the metric
        """
        return 'name of the custom metric'

    def compute(self, ...):
        """
        4. This step may be omitted. compute mainly serves to speed up update:
           Paddle Tensor APIs called here are compiled into the model network and
           executed together by the low-level C++ OPs.
        """
        return ...  # whatever you want returned; it is passed on to update

    def update(self, ...):
        """
        5. Implement update, which computes the metric for a single training batch.
           - If compute is not implemented, the model output and the flattened label
             data are passed to update.
           - If compute is implemented, its return value is passed to update.
        """
        return ...  # acc value

    def accumulate(self):
        """
        6. Implement accumulate, returning the metric accumulated over the batch history.
           Each update call accumulates data; accumulate computes over everything
           accumulated so far and returns it. The result appears in fit's training log.
        """
        # compute from the member variables accumulated in update, then return
        return ...  # accumulated acc value

    def reset(self):
        """
        7. Implement reset, which resets the metric at the end of each epoch so the
           next epoch can compute afresh.
        """
        # do reset action
```

For a concrete example from the framework, here is a metric interface the framework already provides, implemented with exactly the subclassing and member functions described above.

```python
from paddle.metric import Metric

class Precision(Metric):
    """
    Precision (also called positive predictive value) is the fraction of
    relevant instances among the retrieved instances. Refer to
    https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers

    Noted that this class manages the precision score only for
    binary classification task.
    ......
    """

    def __init__(self, name='precision', *args, **kwargs):
        super(Precision, self).__init__(*args, **kwargs)
        self.tp = 0  # true positive
        self.fp = 0  # false positive
        self._name = name

    def update(self, preds, labels):
        """
        Update the states based on the current mini-batch prediction results.
        Args:
            preds (numpy.ndarray): The prediction result, usually the output
                of a two-class sigmoid function. It should be a vector (column
                vector or row vector) with data type: 'float64' or 'float32'.
            labels (numpy.ndarray): The ground truth (labels), the shape should
                keep the same as preds. The data type is 'int32' or 'int64'.
        """
        if isinstance(preds, paddle.Tensor):
            preds = preds.numpy()
        elif not _is_numpy_(preds):
            raise ValueError("The 'preds' must be a numpy ndarray or Tensor.")
        if isinstance(labels, paddle.Tensor):
            labels = labels.numpy()
        elif not _is_numpy_(labels):
            raise ValueError("The 'labels' must be a numpy ndarray or Tensor.")

        sample_num = labels.shape[0]
        preds = np.floor(preds + 0.5).astype("int32")
        for i in range(sample_num):
            pred = preds[i]
            label = labels[i]
            if pred == 1:
                if pred == label:
                    self.tp += 1
                else:
                    self.fp += 1

    def reset(self):
        """
        Resets all of the metric state.
        """
        self.tp = 0
        self.fp = 0

    def accumulate(self):
        """
        Calculate the final precision.

        Returns:
            A scaler float: results of the calculated precision.
        """
        ap = self.tp + self.fp
        return float(self.tp) / ap if ap != 0 else .0

    def name(self):
        """
        Returns metric name
        """
        return self._name
```

6.5 Custom Callback

The callback parameter of `fit` accepts a Callback instance that is invoked before and after each epoch and each batch of training; through callbacks you can collect data and parameters during training or implement custom behavior.

```python
class SelfDefineCallback(paddle.callbacks.Callback):
    """
    1. Inherit from paddle.callbacks.Callback
    2. Implement whichever of the following member methods you need:
       def on_train_begin(self, logs=None)              before training; called in Model.fit
       def on_train_end(self, logs=None)                after training; called in Model.fit
       def on_eval_begin(self, logs=None)               before evaluation; called in Model.evaluate
       def on_eval_end(self, logs=None)                 after evaluation; called in Model.evaluate
       def on_test_begin(self, logs=None)               before prediction; called in Model.predict
       def on_test_end(self, logs=None)                 after prediction; called in Model.predict
       def on_epoch_begin(self, epoch, logs=None)       before each epoch; called in Model.fit
       def on_epoch_end(self, epoch, logs=None)         after each epoch; called in Model.fit
       def on_train_batch_begin(self, step, logs=None)  before each training batch; called in Model.fit and Model.train_batch
       def on_train_batch_end(self, step, logs=None)    after each training batch; called in Model.fit and Model.train_batch
       def on_eval_batch_begin(self, step, logs=None)   before each evaluation batch; called in Model.evaluate and Model.eval_batch
       def on_eval_batch_end(self, step, logs=None)     after each evaluation batch; called in Model.evaluate and Model.eval_batch
       def on_test_batch_begin(self, step, logs=None)   before each prediction batch; called in Model.predict and Model.test_batch
       def on_test_batch_end(self, step, logs=None)     after each prediction batch; called in Model.predict and Model.test_batch
    """
    def __init__(self):
        super(SelfDefineCallback, self).__init__()
    # define your own member methods as needed
```

For a real example from the framework, this is the built-in ModelCheckpoint callback, which makes it easy to save the model automatically after each training epoch during fit.

```python
class ModelCheckpoint(Callback):
    def __init__(self, save_freq=1, save_dir=None):
        self.save_freq = save_freq
        self.save_dir = save_dir

    def on_epoch_begin(self, epoch=None, logs=None):
        self.epoch = epoch

    def _is_save(self):
        return self.model and self.save_dir and ParallelEnv().local_rank == 0

    def on_epoch_end(self, epoch, logs=None):
        if self._is_save() and self.epoch % self.save_freq == 0:
            path = '{}/{}'.format(self.save_dir, epoch)
            print('save checkpoint at {}'.format(os.path.abspath(path)))
            self.model.save(path)

    def on_train_end(self, logs=None):
        if self._is_save():
            path = '{}/final'.format(self.save_dir)
            print('save checkpoint at {}'.format(os.path.abspath(path)))
            self.model.save(path)
```

VII. Model evaluation

A trained model is evaluated with the `evaluate` interface: with the evaluation dataset defined in advance, a single call to `evaluate` completes the evaluation, and the results are computed according to the loss and metric set in prepare. The return value is a dict:
* loss only: `{'loss': xxx}`
* loss plus one metric: `{'loss': xxx, 'metric name': xxx}`
* loss plus several metrics: `{'loss': xxx, 'metric name': xxx, 'metric name': xxx}`
###Code
result = model.evaluate(val_dataset, verbose=1)
###Output
Eval begin...
step 10000/10000 [==============================] - loss: 0.0000e+00 - acc: 0.9834 - 2ms/step
Eval samples: 10000
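###Markdown
The returned dict can be inspected directly; with the prepare configuration above it should contain the loss and the accuracy metric:
###Code
# e.g. {'loss': [...], 'acc': 0.98...} -- exact values depend on the run
print(result)
###Output
_____no_output_____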
###Markdown
VIII. Model prediction

The high-level API provides the `predict` interface for conveniently validating a trained model: feed the data to be predicted into the interface on top of the trained model, and it returns the predictions computed by the model. The return value is a list whose length matches the number of model outputs:
* single-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n)]
* multi-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), (numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), ...]

where numpy_ndarray_n is the prediction computed from the corresponding original sample, and the count matches the size of the prediction dataset.
###Code
pred_result = model.predict(val_dataset)
###Output
Predict begin...
step 10000/10000 [==============================] - 2ms/step
Predict samples: 10000
###Markdown
8.1 Predicting with multiple GPUs

Sometimes there is too much data to validate and a single GPU cannot meet the time requirement; the `predict` interface also supports running in multi-GPU mode. Using it is just as simple: no code changes are needed, only launch the prediction script with launch.

```bash
$ python3 -m paddle.distributed.launch infer.py
```

infer.py is the program containing the model.predict code.

IX. Model deployment

9.1 Saving the model

Once training and validation meet expectations, save the model with the `save` interface, either for later fine-tuning (interface argument training=True) or for inference deployment (interface argument training=False). Note that to save the inference model's parameter file and model file while training in dynamic-graph mode, the forward member function must carry the @paddle.jit.to_static decorator, as in this example:

```python
class Mnist(paddle.nn.Layer):
    def __init__(self):
        super(Mnist, self).__init__()
        self.flatten = paddle.nn.Flatten()
        self.linear_1 = paddle.nn.Linear(784, 512)
        self.linear_2 = paddle.nn.Linear(512, 10)
        self.relu = paddle.nn.ReLU()
        self.dropout = paddle.nn.Dropout(0.2)

    @paddle.jit.to_static
    def forward(self, inputs):
        y = self.flatten(inputs)
        y = self.linear_1(y)
        y = self.relu(y)
        y = self.dropout(y)
        y = self.linear_2(y)
        return y
```
###Code
model.save('~/model/mnist')
###Output
_____no_output_____
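###Markdown
For deployment, the inference-format model can be reloaded with paddle.jit.load. A minimal sketch, assuming the model is first exported with training=False (the path below is illustrative):
###Code
# export the inference model, then reload it for prediction
model.save('~/model/mnist_infer', training=False)  # hypothetical path
loaded = paddle.jit.load('~/model/mnist_infer')
loaded.eval()
###Output
_____no_output_____ |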
###Markdown
飞桨高层API使用指南**作者:** [PaddlePaddle](https://github.com/PaddlePaddle) **日期:** 2021.05 **摘要:** 本示例教程是对飞桨高层API的详细说明,会介绍如何使用高层API,快速完成深度学习任务。 一、简介飞桨框架2.0全新推出高层API,是对飞桨API的进一步封装与升级,提供了更加简洁易用的API,进一步提升了飞桨的易学易用性,并增强飞桨的功能。飞桨高层API面向从深度学习小白到资深开发者的所有人群,对于AI初学者来说,使用高层API可以简单快速的构建深度学习项目,对于资深开发者来说,可以快速完成算法迭代。飞桨高层API具有以下特点:* 易学易用: 高层API是对普通动态图API的进一步封装和优化,同时保持与普通API的兼容性,高层API使用更加易学易用,同样的实现使用高层API可以节省大量的代码。* 低代码开发: 使用飞桨高层API的一个明显特点是编程代码量大大缩减。* 动静转换: 高层API支持动静转换,只需要改一行代码即可实现将动态图代码在静态图模式下训练,既方便使用动态图调试模型,又提升了模型训练效率。在功能增强与使用方式上,高层API有以下升级:* 模型训练方式升级: 高层API中封装了Model类,继承了Model类的神经网络可以仅用几行代码完成模型的训练。* 新增图像处理模块transform: 飞桨新增了图像预处理模块,其中包含数十种数据处理函数,基本涵盖了常用的数据处理、数据增强方法。* 提供常用的神经网络模型可供调用: 高层API中集成了计算机视觉领域和自然语言处理领域常用模型,包括但不限于mobilenet、resnet、yolov3、cyclegan、bert、transformer、seq2seq等等。同时发布了对应模型的预训练模型,可以直接使用这些模型或者在此基础上完成二次开发。 二、安装并使用飞桨高层API飞桨高层API无需独立安装,只需要安装好paddlepaddle即可。如果你的环境不是本版本,请先参考官网[安装](https://www.paddlepaddle.org.cn/install/quick) Paddle 2.1 。安装完成后import paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.vision、NLP领域paddle.text。
###Code
import paddle
import paddle.vision as vision
import paddle.text as text
paddle.__version__
###Output
_____no_output_____
###Markdown
三、目录本指南教学内容覆盖* 使用高层API提供的自带数据集进行相关深度学习任务训练。* 使用自定义数据进行数据集的定义、数据预处理和训练。* 如何在数据集定义和加载中应用数据增强相关接口。* 如何进行模型的组网。* 高层API进行模型训练的相关API使用。* 如何在fit接口满足需求的时候进行自定义,使用基础API来完成训练。* 如何使用多卡来加速训练。 四、数据集定义、加载和数据预处理对于深度学习任务,均是框架针对各种类型数字的计算,是无法直接使用原始图片和文本等文件来完成。那么就是涉及到了一项动作,就是将原始的各种数据文件进行处理加工,转换成深度学习任务可以使用的数据。 4.1 框架自带数据集使用高层API将一些常用到的数据集作为领域API,对应API所在目录为`paddle.vision.datasets`,那么先看下提供了哪些数据集。
###Code
print('视觉相关数据集:', paddle.vision.datasets.__all__)
print('自然语言相关数据集:', paddle.text.__all__)
###Output
视觉相关数据集: ['DatasetFolder', 'ImageFolder', 'MNIST', 'FashionMNIST', 'Flowers', 'Cifar10', 'Cifar100', 'VOC2012']
自然语言相关数据集: ['Conll05st', 'Imdb', 'Imikolov', 'Movielens', 'UCIHousing', 'WMT14', 'WMT16']
###Markdown
这里加载一个手写数字识别的数据集,用`mode`来标识是训练数据还是测试数据集。数据集接口会自动从远端下载数据集到本机缓存目录`~/.cache/paddle/dataset`。
###Code
from paddle.vision.transforms import ToTensor
# 训练数据集
train_dataset = vision.datasets.MNIST(mode='train', transform=ToTensor())
# 验证数据集
val_dataset = vision.datasets.MNIST(mode='test', transform=ToTensor())
###Output
Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-images-idx3-ubyte.gz
Begin to download
Download finished
Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-labels-idx1-ubyte.gz
Begin to download
........
Download finished
Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-images-idx3-ubyte.gz
Begin to download
Download finished
Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-labels-idx1-ubyte.gz
Begin to download
..
Download finished
###Markdown
4.2 自定义数据集更多的时候需要自己使用已有的相关数据来定义数据集,那么这里通过一个案例来了解如何进行数据集的定义,飞桨提供了`paddle.io.Dataset`基类,通过类的集成来快速实现数据集定义。
###Code
from paddle.io import Dataset
class MyDataset(Dataset):
"""
步骤一:继承paddle.io.Dataset类
"""
def __init__(self, mode='train'):
"""
步骤二:实现构造函数,定义数据读取方式,划分训练和测试数据集
"""
super(MyDataset, self).__init__()
if mode == 'train':
self.data = [
['traindata1', 'label1'],
['traindata2', 'label2'],
['traindata3', 'label3'],
['traindata4', 'label4'],
]
else:
self.data = [
['testdata1', 'label1'],
['testdata2', 'label2'],
['testdata3', 'label3'],
['testdata4', 'label4'],
]
def __getitem__(self, index):
"""
步骤三:实现__getitem__方法,定义指定index时如何获取数据,并返回单条数据(训练数据,对应的标签)
"""
data = self.data[index][0]
label = self.data[index][1]
return data, label
def __len__(self):
"""
步骤四:实现__len__方法,返回数据集总数目
"""
return len(self.data)
# 测试定义的数据集
train_dataset_2 = MyDataset(mode='train')
val_dataset_2 = MyDataset(mode='test')
print('=============train dataset=============')
for data, label in train_dataset_2:
print(data, label)
print('=============evaluation dataset=============')
for data, label in val_dataset_2:
print(data, label)
###Output
=============train dataset=============
traindata1 label1
traindata2 label2
traindata3 label3
traindata4 label4
=============evaluation dataset=============
testdata1 label1
testdata2 label2
testdata3 label3
testdata4 label4
###Markdown
4.3 数据增强训练过程中有时会遇到过拟合的问题,其中一个解决方法就是对训练数据做增强,对数据进行处理得到不同的图像,从而泛化数据集。数据增强API是定义在领域目录的transofrms下,这里介绍两种使用方式,一种是基于框架自带数据集,一种是基于自己定义的数据集。 4.3.1 框架自带数据集
###Code
from paddle.vision.transforms import Compose, Resize, ColorJitter
# 定义想要使用那些数据增强方式,这里用到了随机调整亮度、对比度和饱和度,改变图片大小
transform = Compose([ColorJitter(), Resize(size=100)])
# 通过transform参数传递定义好的数据增项方法即可完成对自带数据集的应用
train_dataset_3 = vision.datasets.MNIST(mode='train', transform=transform)
###Output
_____no_output_____
###Markdown
4.3.2 自定义数据集针对自定义数据集使用数据增强有两种方式,一种是在数据集的构造函数中进行数据增强方法的定义,之后对__getitem__中返回的数据进行应用。另外一种方式也可以给自定义的数据集类暴漏一个构造参数,在实例化类的时候将数据增强方法传递进去。
###Code
from paddle.io import Dataset
class MyDataset(Dataset):
def __init__(self, mode='train'):
super(MyDataset, self).__init__()
if mode == 'train':
self.data = [
['traindata1', 'label1'],
['traindata2', 'label2'],
['traindata3', 'label3'],
['traindata4', 'label4'],
]
else:
self.data = [
['testdata1', 'label1'],
['testdata2', 'label2'],
['testdata3', 'label3'],
['testdata4', 'label4'],
]
# 定义要使用的数据预处理方法,针对图片的操作
self.transform = Compose([ColorJitter(), Resize(size=100)])
def __getitem__(self, index):
data = self.data[index][0]
# 在这里对训练数据进行应用
# 这里只是一个示例,测试时需要将数据集更换为图片数据进行测试
data = self.transform(data)
label = self.data[index][1]
return data, label
def __len__(self):
return len(self.data)
###Output
_____no_output_____
###Markdown
五、模型组网针对高层API在模型组网上和基础API是统一的一套,无需投入额外的学习使用成本。那么这里我举几个简单的例子来做示例。 5.1 Sequential组网针对顺序的线性网络结构可以直接使用Sequential来快速完成组网,可以减少类的定义等代码编写。
###Code
# Sequential形式组网
mnist = paddle.nn.Sequential(
paddle.nn.Flatten(),
paddle.nn.Linear(784, 512),
paddle.nn.ReLU(),
paddle.nn.Dropout(0.2),
paddle.nn.Linear(512, 10)
)
###Output
_____no_output_____
###Markdown
5.2 SubClass组网针对一些比较复杂的网络结构,就可以使用Layer子类定义的方式来进行模型代码编写,在`__init__`构造函数中进行组网Layer的声明,在`forward`中使用声明的Layer变量进行前向计算。子类组网方式也可以实现sublayer的复用,针对相同的layer可以在构造函数中一次性定义,在forward中多次调用。
###Code
# Layer类继承方式组网
class Mnist(paddle.nn.Layer):
def __init__(self):
super(Mnist, self).__init__()
self.flatten = paddle.nn.Flatten()
self.linear_1 = paddle.nn.Linear(784, 512)
self.linear_2 = paddle.nn.Linear(512, 10)
self.relu = paddle.nn.ReLU()
self.dropout = paddle.nn.Dropout(0.2)
def forward(self, inputs):
y = self.flatten(inputs)
y = self.linear_1(y)
y = self.relu(y)
y = self.dropout(y)
y = self.linear_2(y)
return y
mnist_2 = Mnist()
###Output
_____no_output_____
###Markdown
5.3 模型封装定义好网络结构之后来使用`paddle.Model`完成模型的封装,将网络结构组合成一个可快速使用高层API进行训练、评估和预测的类。在封装的时候有两种场景,动态图训练模式和静态图训练模式。
###Code
# 使用GPU训练
# paddle.set_device('gpu')
# 模型封装
## 场景1:动态图模式
## 1.1 为模型预测部署场景进行模型训练
## 需要添加input和label数据描述,否则会导致使用model.save(training=False)保存的预测模型在使用时出错
inputs = paddle.static.InputSpec([-1, 1, 28, 28], dtype='float32', name='input')
label = paddle.static.InputSpec([-1, 1], dtype='int8', name='label')
model = paddle.Model(mnist, inputs, label)
## 1.2 面向实验而进行的模型训练
## 可以不传递input和label信息
# model = paddle.Model(mnist)
## 场景2:静态图模式
# paddle.enable_static()
# paddle.set_device('gpu')
# input = paddle.static.InputSpec([None, 1, 28, 28], dtype='float32')
# label = paddle.static.InputSpec([None, 1], dtype='int8')
# model = paddle.Model(mnist, input, label)
###Output
_____no_output_____
###Markdown
5.4 模型可视化在组建好网络结构后,一般会想去对网络结构进行一下可视化,逐层的去对齐一下网络结构参数,看看是否符合预期。这里可以通过`Model.summary`接口进行可视化展示。
###Code
model.summary((1, 28, 28))
###Output
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Flatten-1 [[1, 28, 28]] [1, 784] 0
Linear-1 [[1, 784]] [1, 512] 401,920
ReLU-1 [[1, 512]] [1, 512] 0
Dropout-1 [[1, 512]] [1, 512] 0
Linear-2 [[1, 512]] [1, 10] 5,130
===========================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.02
Params size (MB): 1.55
Estimated Total Size (MB): 1.57
---------------------------------------------------------------------------
###Markdown
另外,summary接口有两种使用方式,下面通过两个示例来做展示,除了`Model.summary`这种配套`paddle.Model`封装使用的接口外,还有一套配合没有经过`paddle.Model`封装的方式来使用。可以直接将实例化好的Layer子类放到`paddle.summary`接口中进行可视化呈现。
###Code
paddle.summary(mnist, (1, 28, 28))
###Output
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Flatten-1 [[1, 28, 28]] [1, 784] 0
Linear-1 [[1, 784]] [1, 512] 401,920
ReLU-1 [[1, 512]] [1, 512] 0
Dropout-1 [[1, 512]] [1, 512] 0
Linear-2 [[1, 512]] [1, 10] 5,130
===========================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.02
Params size (MB): 1.55
Estimated Total Size (MB): 1.57
---------------------------------------------------------------------------
###Markdown
这里面有一个注意的点,有些读者可能会疑惑为什么要传递`(1, 28, 28)`这个input_size参数,因为在动态图中,网络定义阶段是还没有得到输入数据的形状信息,想要做网络结构的呈现就无从下手,那么通过告知接口网络结构的输入数据形状,这样网络可以通过逐层的计算推导得到完整的网络结构信息进行呈现。如果是动态图运行模式,那么就不需要给summary接口传递输入数据形状这个值了,因为在Model封装的时候已经定义好了InputSpec,其中包含了输入数据的形状格式。 六、模型训练网络结构通过`paddle.Model`接口封装成模型类后进行执行操作非常的简洁方便,可以直接通过调用`Model.fit`就可以完成训练过程。使用`Model.fit`接口启动训练前,先通过`Model.prepare`接口来对训练进行提前的配置准备工作,包括设置模型优化器,Loss计算方法,精度计算方法等。
###Code
# 为模型训练做准备,设置优化器,损失函数和精度计算方式
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
###Output
_____no_output_____
###Markdown
做好模型训练的前期准备工作后,正式调用`fit()`接口来启动训练过程,需要指定一下至少3个关键参数:训练数据集,训练轮次和单次训练数据批次大小。
###Code
# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式
model.fit(train_dataset,
epochs=5,
batch_size=64,
verbose=1)
###Output
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/5
step 20/938 [..............................] - loss: 0.6998 - acc: 0.6555 - ETA: 24s - 27ms/ste
###Markdown
**注:**`fit()`的第一个参数不仅可以传递数据集`paddle.io.Dataset`,还可以传递DataLoader,如果想要实现某个自定义的数据集抽样等逻辑,可以在fit外自定义DataLoader,然后传递给fit函数。```pythontrain_dataloader = paddle.io.DataLoader(train_dataset)...model.fit(train_dataloader, ...)``` 6.1 单机单卡把刚才单步教学的训练代码做一个整合,这个完整的代码示例就是单机单卡训练程序。
###Code
# 使用GPU训练
# paddle.set_device('gpu')
# 构建模型训练用的Model,告知需要训练哪个模型
model = paddle.Model(mnist)
# 为模型训练做准备,设置优化器,损失函数和精度计算方式
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式
model.fit(train_dataset,
epochs=5,
batch_size=64,
verbose=1)
###Output
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/5
step 938/938 [==============================] - loss: 0.0433 - acc: 0.9871 - 23ms/step
Epoch 2/5
step 938/938 [==============================] - loss: 0.0040 - acc: 0.9900 - 23ms/step
Epoch 3/5
step 938/938 [==============================] - loss: 0.0015 - acc: 0.9917 - 23ms/step
Epoch 4/5
step 938/938 [==============================] - loss: 2.9539e-04 - acc: 0.9925 - 23ms/step
Epoch 5/5
step 938/938 [==============================] - loss: 0.0371 - acc: 0.9933 - 23ms/step
###Markdown
6.2 单机多卡对于高层API来实现单机多卡非常简单,整个训练代码和单机单卡没有差异。直接使用`paddle.distributed.launch`启动单机单卡的程序即可。```bash$ python -m paddle.distributed.launch train.py```train.py里面包含的就是单机单卡代码 6.3 自定义Loss有时会遇到特定任务的Loss计算方式在框架既有的Loss接口中不存在,或算法不符合自己的需求,那么期望能够自己来进行Loss的自定义,这里就会讲解介绍一下如何进行Loss的自定义操作,首先来看下面的代码:```pythonclass SelfDefineLoss(paddle.nn.Layer): """ 1. 继承paddle.nn.Layer """ def __init__(self): """ 2. 构造函数根据自己的实际算法需求和使用需求进行参数定义即可 """ super(SelfDefineLoss, self).__init__() def forward(self, input, label): """ 3. 实现forward函数,forward在调用时会传递两个参数:input和label - input:单个或批次训练数据经过模型前向计算输出结果 - label:单个或批次训练数据对应的标签数据 接口返回值是一个Tensor,根据自定义的逻辑加和或计算均值后的损失 """ 使用Paddle中相关API自定义的计算逻辑 output = xxxxx return output```那么了解完代码层面如果编写自定义代码后看一个实际的例子,下面是在图像分割示例代码中写的一个自定义Loss,当时主要是想使用自定义的softmax计算维度。```pythonclass SoftmaxWithCrossEntropy(paddle.nn.Layer): def __init__(self): super(SoftmaxWithCrossEntropy, self).__init__() def forward(self, input, label): loss = F.softmax_with_cross_entropy(input, label, return_softmax=False, axis=1) return paddle.mean(loss)``` 6.4 自定义Metric和Loss一样,如果遇到一些想要做个性化实现的操作时,也可以来通过框架完成自定义的评估计算方法,具体的实现方式如下:```pythonclass SelfDefineMetric(paddle.metric.Metric): """ 1. 继承paddle.metric.Metric """ def __init__(self): """ 2. 构造函数实现,自定义参数即可 """ super(SelfDefineMetric, self).__init__() def name(self): """ 3. 实现name方法,返回定义的评估指标名字 """ return '自定义评价指标的名字' def compute(self, ...) """ 4. 本步骤可以省略,实现compute方法,这个方法主要用于`update`的加速,可以在这个方法中调用一些paddle实现好的Tensor计算API,编译到模型网络中一起使用低层C++ OP计算。 """ return 自己想要返回的数据,会做为update的参数传入。 def update(self, ...): """ 5. 实现update方法,用于单个batch训练时进行评估指标计算。 - 当`compute`类函数未实现时,会将模型的计算输出和标签数据的展平作为`update`的参数传入。 - 当`compute`类函数做了实现时,会将compute的返回结果作为`update`的参数传入。 """ return acc value def accumulate(self): """ 6. 实现accumulate方法,返回历史batch训练积累后计算得到的评价指标值。 每次`update`调用时进行数据积累,`accumulate`计算时对积累的所有数据进行计算并返回。 结算结果会在`fit`接口的训练日志中呈现。 """ 利用update中积累的成员变量数据进行计算后返回 return accumulated acc value def reset(self): """ 7. 实现reset方法,每个Epoch结束后进行评估指标的重置,这样下个Epoch可以重新进行计算。 """ do reset action```看一个框架中的具体例子,这个是框架中已提供的一个评估指标计算接口,这里就是按照上述说明中的实现方法进行了相关类继承和成员函数实现。```pythonfrom paddle.metric import Metricclass Precision(Metric): """ Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Refer to https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers Noted that this class manages the precision score only for binary classification task. ...... """ def __init__(self, name='precision', *args, **kwargs): super(Precision, self).__init__(*args, **kwargs) self.tp = 0 true positive self.fp = 0 false positive self._name = name def update(self, preds, labels): """ Update the states based on the current mini-batch prediction results. Args: preds (numpy.ndarray): The prediction result, usually the output of two-class sigmoid function. It should be a vector (column vector or row vector) with data type: 'float64' or 'float32'. labels (numpy.ndarray): The ground truth (labels), the shape should keep the same as preds. The data type is 'int32' or 'int64'. 
""" if isinstance(preds, paddle.Tensor): preds = preds.numpy() elif not _is_numpy_(preds): raise ValueError("The 'preds' must be a numpy ndarray or Tensor.") if isinstance(labels, paddle.Tensor): labels = labels.numpy() elif not _is_numpy_(labels): raise ValueError("The 'labels' must be a numpy ndarray or Tensor.") sample_num = labels.shape[0] preds = np.floor(preds + 0.5).astype("int32") for i in range(sample_num): pred = preds[i] label = labels[i] if pred == 1: if pred == label: self.tp += 1 else: self.fp += 1 def reset(self): """ Resets all of the metric state. """ self.tp = 0 self.fp = 0 def accumulate(self): """ Calculate the final precision. Returns: A scaler float: results of the calculated precision. """ ap = self.tp + self.fp return float(self.tp) / ap if ap != 0 else .0 def name(self): """ Returns metric name """ return self._name``` 6.5 自定义Callback`fit`接口的callback参数支持传一个Callback类实例,用来在每轮训练和每个batch训练前后进行调用,可以通过callback收集到训练过程中的一些数据和参数,或者实现一些自定义操作。```pythonclass SelfDefineCallback(paddle.callbacks.Callback): """ 1. 继承paddle.callbacks.Callback 2. 按照自己的需求实现以下类成员方法: def on_train_begin(self, logs=None) 训练开始前,`Model.fit`接口中调用 def on_train_end(self, logs=None) 训练结束后,`Model.fit`接口中调用 def on_eval_begin(self, logs=None) 评估开始前,`Model.evaluate`接口调用 def on_eval_end(self, logs=None) 评估结束后,`Model.evaluate`接口调用 def on_test_begin(self, logs=None) 预测测试开始前,`Model.predict`接口中调用 def on_test_end(self, logs=None) 预测测试结束后,`Model.predict`接口中调用 def on_epoch_begin(self, epoch, logs=None) 每轮训练开始前,`Model.fit`接口中调用 def on_epoch_end(self, epoch, logs=None) 每轮训练结束后,`Model.fit`接口中调用 def on_train_batch_begin(self, step, logs=None) 单个Batch训练开始前,`Model.fit`和`Model.train_batch`接口中调用 def on_train_batch_end(self, step, logs=None) 单个Batch训练结束后,`Model.fit`和`Model.train_batch`接口中调用 def on_eval_batch_begin(self, step, logs=None) 单个Batch评估开始前,`Model.evalute`和`Model.eval_batch`接口中调用 def on_eval_batch_end(self, step, logs=None) 单个Batch评估结束后,`Model.evalute`和`Model.eval_batch`接口中调用 def on_test_batch_begin(self, step, logs=None) 单个Batch预测测试开始前,`Model.predict`和`Model.test_batch`接口中调用 def on_test_batch_end(self, step, logs=None) 单个Batch预测测试结束后,`Model.predict`和`Model.test_batch`接口中调用 """ def __init__(self): super(SelfDefineCallback, self).__init__() 按照需求定义自己的类成员方法```看一个框架中的实际例子,这是一个框架自带的ModelCheckpoint回调函数,方便在fit训练模型时自动存储每轮训练得到的模型。```pythonclass ModelCheckpoint(Callback): def __init__(self, save_freq=1, save_dir=None): self.save_freq = save_freq self.save_dir = save_dir def on_epoch_begin(self, epoch=None, logs=None): self.epoch = epoch def _is_save(self): return self.model and self.save_dir and ParallelEnv().local_rank == 0 def on_epoch_end(self, epoch, logs=None): if self._is_save() and self.epoch % self.save_freq == 0: path = '{}/{}'.format(self.save_dir, epoch) print('save checkpoint at {}'.format(os.path.abspath(path))) self.model.save(path) def on_train_end(self, logs=None): if self._is_save(): path = '{}/final'.format(self.save_dir) print('save checkpoint at {}'.format(os.path.abspath(path))) self.model.save(path)``` 七、模型评估对于训练好的模型进行评估操作可以使用`evaluate`接口来实现,事先定义好用于评估使用的数据集后,可以简单的调用`evaluate`接口即可完成模型评估操作,结束后根据prepare中loss和metric的定义来进行相关评估结果计算返回。返回格式是一个字典:* 只包含loss,`{'loss': xxx}`* 包含loss和一个评估指标,`{'loss': xxx, 'metric name': xxx}`* 包含loss和多个评估指标,`{'loss': xxx, 'metric name': xxx, 'metric name': xxx}`
###Code
result = model.evaluate(val_dataset, verbose=1)
###Output
Eval begin...
step 10000/10000 [==============================] - loss: 1.1921e-07 - acc: 0.9814 - 2ms/step
Eval samples: 10000
###Markdown
8. Model prediction

The high-level API provides the `predict` interface for conveniently validating a trained model on new data: simply pass the data to be predicted into the interface, and it computes and returns the model's predictions. The return value is a list whose length matches the number of model outputs:
* single-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n)]
* multi-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), (numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), ...]

Each numpy_ndarray_n is the prediction obtained by running the model on the corresponding original sample, and the count matches the size of the prediction dataset.
###Code
pred_result = model.predict(val_dataset)
###Output
Predict begin...
step 10000/10000 [==============================] - 2ms/step
Predict samples: 10000
###Markdown
8.1 Prediction with multiple GPUs

Sometimes there is too much data to predict for a single GPU to meet the time requirement; the `predict` interface therefore also supports running in multi-GPU mode. Usage is again very simple: no code changes are needed, just start the prediction script with launch.

```bash
$ python3 -m paddle.distributed.launch infer.py
```

Here infer.py is the program containing the model.predict code.

9. Model deployment

9.1 Model saving

Once training and validation meet expectations, the model can be stored with the `save` interface, either for later fine-tuning (interface parameter training=True) or for inference deployment (interface parameter training=False). Note that to save the parameter and model files of an inference model while training in dynamic-graph mode, the @paddle.jit.to_static decorator must be added to the forward member function; see the following example:

```python
class Mnist(paddle.nn.Layer):
    def __init__(self):
        super(Mnist, self).__init__()
        self.flatten = paddle.nn.Flatten()
        self.linear_1 = paddle.nn.Linear(784, 512)
        self.linear_2 = paddle.nn.Linear(512, 10)
        self.relu = paddle.nn.ReLU()
        self.dropout = paddle.nn.Dropout(0.2)

    @paddle.jit.to_static
    def forward(self, inputs):
        y = self.flatten(inputs)
        y = self.linear_1(y)
        y = self.relu(y)
        y = self.dropout(y)
        y = self.linear_2(y)
        return y
```
###Code
model.save('~/model/mnist')
###Output
_____no_output_____ |
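###Markdown
A checkpoint saved with training=True can later be restored to resume or fine-tune training; a minimal sketch, assuming the high-level `Model.load` counterpart of the `save` call above:
```python
# reload parameters (and optimizer state, if present) into the same Model instance
model.load('~/model/mnist')
```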
notebook/Unit1-4-1-Tuple.ipynb | ###Markdown
5 STRUCTURED TYPES, MUTABILITY, AND HIGHER-ORDER FUNCTIONS

The numeric types `int` and `float` are **scalar** types. That is to say, objects of these types have **no accessible internal structure**. In contrast, `str` can be thought of as a structured, or non-scalar, type. One can use indexing to extract individual characters from a string and slicing to extract substrings.

In this chapter, we introduce four additional structured types. One, tuple, is a rather simple generalization of `str`. The other three (list, range, and dict) are more **interesting**. We also return to the topic of functions with some examples that illustrate the utility of being able to treat functions in the same way as other types of objects.

5.1 Tuples

Like strings, **tuples** are immutable ordered sequences of elements. The difference is that the elements of a tuple need not be characters. The individual elements can be of any type, and need not be of the same type as each other. Literals of type `tuple` are written by enclosing a comma-separated list of elements within parentheses ( ).
###Code
t1 = () # empty tuple
t2 = (1, 'two', 3) # 1)any type; 2) not be of the same type as each other.
student=('Name',22)
print(t1)
print(t2)
print(student)
###Output
()
(1, 'two', 3)
('Name', 22)
###Markdown
Looking at this example, you might naturally be led to believe that the tuple containing the single value 1 would be written `(1)`. But, to quote [Richard Nixon](https://en.wikipedia.org/wiki/Richard_Nixon), “that would be wrong.” Since parentheses ( ) are used to group expressions, `(1)` is merely a verbose way to write the integer 1.
###Code
a=(1)
a
b=(a+2,)
b
a=(1,)
a
###Output
_____no_output_____
###Markdown
To denote the singleton tuple containing this value, we write `(1,)`. **Almost everybody who uses Python has at one time or another accidentally omitted that annoying comma.**
###Code
expre=('10+1')
tsingleton=('10+1',) # comma-separated
print(expre)
print(type(expre))
print('\nThe singleton tuple containing this value')
print(tsingleton)
print(type(tsingleton))
###Output
_____no_output_____
###Markdown
**Repetition** can be used on tuples. For example, the expression `3*('a', 2)` evaluates to `('a', 2, 'a', 2, 'a', 2).`
###Code
3*('a', 2)
###Output
_____no_output_____
###Markdown
Like strings, tuples can be concatenated, indexed, and sliced (indexing starts at 0).

concatenated
###Code
t1 = (1, 'two', 3)
t2 = (t1, 3.25) # any type,tuples can contain tuples
print('t2=',t2)
print('t1+t2=',t1 + t2) # + concatenated
###Output
t2= ((1, 'two', 3), 3.25)
t1+t2= (1, 'two', 3, (1, 'two', 3), 3.25)
###Markdown
indexed
###Code
print('(t1 + t2)[3]=',(t1 + t2)[3]) # [3] indexed tuple :as always in Python, indexing starts at 0
###Output
(t1 + t2)[3]= (1, 'two', 3)
###Markdown
sliced (indexing starts at 0)
###Code
print('(t1 + t2)[2:4]=',(t1 + t2)[2:4]) # [2:5] sliced
###Output
(t1 + t2)[2:4]= (3, (1, 'two', 3))
###Markdown
The second assignment statement binds the name t2 to a tuple that contains the tuple to which t1 is bound and the floating point number 3.25. This is possible because a tuple, like everything else in Python, is an object, so tuples can contain tuples. Therefore, the first print statement produces the output
```python
((1, 'two', 3), 3.25)
```
The second print statement prints the value generated by concatenating the values bound to t1 and t2, which is a tuple with five elements. It produces the output
```python
(1, 'two', 3, (1, 'two', 3), 3.25)
```
The next statement selects and prints the fourth element of the concatenated tuple (as always in Python, indexing starts at 0), and the statement after that creates and prints a slice of that tuple, producing the output
```python
(1, 'two', 3)
(3, (1, 'two', 3))
```

Tuples are immutable

Tuples cannot be modified after they are created. If you try to modify one of the elements of a tuple, you get an error:
```python
TypeError: 'tuple' object does not support item assignment
```
###Code
t1 = (1, 'two', 3)
t1[0]='A'
###Output
_____no_output_____
###Markdown
Because tuples are immutable, you can't modify the elements, but you can replace one tuple with another. The statement below makes a **new tuple** and then makes `t1` refer to it.
###Code
t1 = ('A',) + t1[1:]
t1
###Output
_____no_output_____
###Markdown
`for e in seq`

A `for` statement can be used to iterate over the elements of a `tuple`.
###Code
t1 = (1, 'two', 3)
for e in t1:
print(e)
def intersect(t1, t2):
"""Assumes t1 and t2 are tuples
Returns a tuple containing elements that are in
both t1 and t2
"""
result = ()
for e in t1:
if e in t2:
result = result + (e,)
return result
t1 = (1,2, 'two', 3,4)
t2 = (1,4)
result=intersect(t1, t2)
print(result)
###Output
_____no_output_____
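###Markdown
The same result can be built more idiomatically by passing a generator expression to the `tuple` constructor; a one-line sketch equivalent to `intersect` above:
```python
t1 = (1, 2, 'two', 3, 4)
t2 = (1, 4)
result = tuple(e for e in t1 if e in t2)
print(result)   # (1, 4)
```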
###Markdown
Operators and Functions on tuples

You can operate on tuples using (supposing that tup is a tuple):
* built-in functions such as `len(tup)`;
* built-in functions for tuples of numbers, such as `max(tup)`, `min(tup)` and `sum(tup)`

Tuple methods:
* `count(e)`: counts the number of occurrences of a value e
* `index(e)`: returns the index of the first occurrence of e in tup, or raises an error if e is absent
###Code
tup=(1,2,2,3)
len(tup)
max(tup), min(tup), sum(tup)
tup.count(2)
tup.index(4)   # 4 does not occur in tup, so this raises a ValueError
###Output
_____no_output_____
###Markdown
5.2 Sequences and Multiple Assignment

If you know the length of a sequence (e.g., a tuple or a string), it can be convenient to use Python's multiple assignment statement to extract the individual elements.
* **Sequence unpacking**
###Code
# Sequence unpacking
x, y ,z= (3, 4,5)
a, b, c = 'xyz'
print('x=',x,' y=',y)
print('a=',a,' b=',b,' c=',c)
###Output
x= 3 y= 4
a= x b= y c= z
###Markdown
This mechanism is particularly convenient when used in conjunction with functions that return fixed-size sequences.
###Code
def bisection(func,low,high,k,epsilon):
ans = (high + low)/2.0
numGuesses = 0
while abs(func(ans,k)) >= epsilon:
numGuesses += 1
        if func(ans, k) < 0:  # generic sign test; the original ans**2 < k only held for the square-root case
            low = ans
else:
high = ans
ans = (high + low)/2.0
return ans,numGuesses
def func1(x,k):
return x**2-k
k = 25
epsilon = 0.01
low = 0.0
high = max(1.0, k) # build-in function
ans,numGuesses=bisection(func1,low,high,k,epsilon)
print('numGuesses =', numGuesses)
print(ans, 'is close to square root of', k)
# tuple
results = bisection(func1,low,high,k,epsilon)
print(results)
print('ans=', results[0])
print('numGuesses=', results[1])
###Output
(5.00030517578125, 13)
ans= 5.00030517578125
numGuesses= 13
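###Markdown
Multiple assignment is also the idiomatic way to swap two names without a temporary variable; a small sketch of the same unpacking mechanism:
```python
x, y = 1, 2
x, y = y, x   # the right-hand side builds the tuple (2, 1), which is then unpacked
print(x, y)   # 2 1
```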
###Markdown
5 STRUCTURED TYPES, MUTABILITY, AND HIGHER-ORDER FUNCTIONS

The numeric types `int` and `float` are **scalar** types. That is to say, objects of these types have **no accessible internal structure**. In contrast, `str` can be thought of as a structured, or non-scalar, type. One can use indexing to extract individual characters from a string and slicing to extract substrings.

In this chapter, we introduce four additional structured types. One, tuple, is a rather simple generalization of `str`. The other three (list, range, and dict) are more **interesting**. We also return to the topic of functions with some examples that illustrate the utility of being able to treat functions in the same way as other types of objects.

5.1 Tuples

Like strings, **tuples** are immutable ordered sequences of elements. The difference is that the elements of a tuple need not be characters. The individual elements can be of any type, and need not be of the same type as each other. Literals of type `tuple` are written by enclosing a comma-separated list of elements within parentheses ( ).
###Code
t1 = () # empty tuple
t2 = (1, 'two', 3) # 1)any type; 2) not be of the same type as each other.
student=('Name',22)
print(t1)
print(t2)
print(student)
###Output
_____no_output_____
###Markdown
Looking at this example, you might naturally be led to believe that the tuple containing the single value 1 would be written `(1)`. But, to quote [Richard Nixon](https://en.wikipedia.org/wiki/Richard_Nixon), “that would be wrong.” Since parentheses ( ) are used to group expressions, `(1)` is merely a verbose way to write the integer 1.
###Code
a=(1)
a
b=(a+2)
b
###Output
_____no_output_____
###Markdown
To denote the singleton tuple containing this value, we write `(1,)`. **Almost everybody who uses Python has at one time or another accidentally omitted that annoying comma.**
###Code
expre=('10+1')
tsingleton=('10+1',) # comma-separated
print(expre)
print(type(expre))
print('\nThe singleton tuple containing this value')
print(tsingleton)
print(type(tsingleton))
###Output
_____no_output_____
###Markdown
**Repetition** can be used on tuples. For example, the expression `3*('a', 2)` evaluates to `('a', 2, 'a', 2, 'a', 2).`
###Code
3*('a', 2)
###Output
_____no_output_____
###Markdown
Like strings, tuples can be concatenated, indexed, and sliced (indexing starts at 0).

concatenated
###Code
t1 = (1, 'two', 3)
t2 = (t1, 3.25) # any type,tuples can contain tuples
print('t2=',t2)
print('t1+t2=',t1 + t2) # + concatenated
###Output
_____no_output_____
###Markdown
indexed
###Code
print('(t1 + t2)[3]=',(t1 + t2)[3]) # [3] indexed tuple :as always in Python, indexing starts at 0
###Output
_____no_output_____
###Markdown
sliced (indexing starts at 0)
###Code
print('(t1 + t2)[2:4]=',(t1 + t2)[2:4]) # [2:5] sliced
###Output
_____no_output_____
###Markdown
The second assignment statement binds the name t2 to a tuple that contains the tuple to which t1 is bound and the floating point number 3.25. This is possible because a tuple, like everything else in Python, is an object, so tuples can contain tuples. Therefore, the first print statement produces the output
```python
((1, 'two', 3), 3.25)
```
The second print statement prints the value generated by concatenating the values bound to t1 and t2, which is a tuple with five elements. It produces the output
```python
(1, 'two', 3, (1, 'two', 3), 3.25)
```
The next statement selects and prints the fourth element of the concatenated tuple (as always in Python, indexing starts at 0), and the statement after that creates and prints a slice of that tuple, producing the output
```python
(1, 'two', 3)
(3, (1, 'two', 3))
```

Tuples are immutable

Tuples cannot be modified after they are created. If you try to modify one of the elements of a tuple, you get an error:
```python
TypeError: 'tuple' object does not support item assignment
```
###Code
t1 = (1, 'two', 3)
t1[1]='A'
###Output
_____no_output_____
###Markdown
Because tuples are immutable, you can't modify the elements, but you can replace one tuple with another. The statement below makes a **new tuple** and then makes `t1` refer to it.
###Code
t1 = ('A',) + t1[1:]
t1
###Output
_____no_output_____
###Markdown
`for e in seq`

A `for` statement can be used to iterate over the elements of a `tuple`.
###Code
t1 = (1, 'two', 3)
for e in t1:
print(e)
def intersect(t1, t2):
"""Assumes t1 and t2 are tuples
Returns a tuple containing elements that are in
both t1 and t2"""
result = ()
for e in t1:
if e in t2:
result += (e,)
return result
t1 = (1,2, 'two', 3,4)
t2 = (1,4)
result=intersect(t1, t2)
print(result)
def findDivisors (n1, n2):
"""Assumes that n1 and n2 are positive ints
    Returns a tuple containing all common divisors of n1 & n2"""
divisors = () #the empty tuple
for i in range(1, min (n1, n2) + 1):
if n1%i == 0 and n2%i == 0: # common divisors
            divisors = divisors + (i,)  # (i,) is a one-element tuple; + concatenates tuples
return divisors
divisors = findDivisors(20, 100)
print('common divisors:',divisors)
total = 0
# iterate over the elements of the tuple with a for ... in loop
for d in divisors:
total += d
print('sum: ',total)
###Output
_____no_output_____
###Markdown
Operators and Functions on tuples

You can operate on tuples using (supposing that tup is a tuple):
* built-in functions such as `len(tup)`;
* built-in functions for tuples of numbers, such as `max(tup)`, `min(tup)` and `sum(tup)`

Tuple methods:
* `count(e)`: counts the number of occurrences of a value e
* `index(e)`: returns the index of the first occurrence of e in tup, or raises an error if e is absent
###Code
tup=(1,2,2,3)
len(tup)
max(tup), min(tup), sum(tup)
tup.count(2)
tup.index(2)
###Output
_____no_output_____
###Markdown
5.2 Sequences and Multiple Assignment

If you know the length of a sequence (e.g., a tuple or a string), it can be convenient to use Python's multiple assignment statement to extract the individual elements.
* **Sequence unpacking**
###Code
# Sequence unpacking
x, y ,z= (3, 4,5)
a, b, c = 'xyz'
print('x=',x,' y=',y)
print('a=',a,' b=',b,' c=',c)
###Output
_____no_output_____
###Markdown
This mechanism is particularly convenient when used in conjunction with functions that return `fixed-size` sequences.
###Code
def findExtremeDivisors(n1, n2):
"""Assumes that n1 and n2 are positive ints
Returns a tuple containing the smallest common divisor > 1 and
the largest common divisor of n1 and n2.
If no common divisor, returns (None, None)
"""
minVal, maxVal = None, None # multiple assignment statement
for i in range(2, min(n1, n2) + 1):
if n1 % i == 0 and n2 % i == 0:
# if minVal == None or i < minVal:
# minVal = i
# if maxVal == None or i > maxVal:
# maxVal = i
if minVal == None:
minVal = i
maxVal = i
return (minVal, maxVal) # return fixed-size sequences:tuple
# multiple assignment statement conjunction with functions that return fixed-size sequences.
minDivisor, maxDivisor = findExtremeDivisors(100, 200)
print('minDivisor=',minDivisor)
print('maxDivisor=',maxDivisor)
def findExtremeDivisors(n1, n2):
"""Assumes that n1 and n2 are positive ints
Returns a tuple containing the smallest common
divisor > 1 and the largest common divisor of n1 and n2
If no common divisor, returns (None, None)
"""
minVal, maxVal = None, None # multiple assignment statement
for i in range(2, min(n1, n2) + 1):
if n1 % i == 0 and n2 % i == 0:
if minVal == None:
minVal = i
maxVal = i
divisors = (minVal, maxVal)
return divisors # return fixed-size sequences
# tuple
divisors = findExtremeDivisors(100, 200)
print(divisors)
print('minDivisor=', divisors[0])
print('maxDivisor=', divisors[1])
###Output
_____no_output_____
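###Markdown
When the sequence is longer than the names on the left, a starred name can absorb the extra elements; a short sketch of this extended unpacking:
```python
smallest, *rest = (2, 4, 5, 10, 20)
print(smallest)   # 2
print(rest)       # [4, 5, 10, 20]  (note: the starred name is bound to a list)
```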
examples/calculate.ipynb | ###Markdown
Example notebook for the functions contained in cry_calculate.py

Calculate the BSSE
###Code
from crystal_functions.calculate import cry_bsse
###Output
_____no_output_____
###Markdown
Generate a pymatgen structure (substrate+adsorbate)
###Code
import sys
sys.path.insert(1, '../')
from pymatgen.core import Structure, Lattice
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from pymatgen.core.surface import SlabGenerator
#from cry_file_readwrite import write_gui
import numpy as np
substrate = Structure.from_spacegroup("Fm-3m", Lattice.cubic(3.597), ["Cu"], [[0, 0, 0]])
substrate = SpacegroupAnalyzer(substrate).get_conventional_standard_structure()
substrate = SpacegroupAnalyzer(substrate).get_symmetrized_structure()
slab = SlabGenerator(substrate, (1,0,0), 5., 10., center_slab=True).get_slab()
substrate.atomic_numbers
n_symmops = len(SpacegroupAnalyzer(substrate).get_space_group_operations())
print(n_symmops/4)
print(SpacegroupAnalyzer(substrate).get_point_group_operations(cartesian=True)[3])
print(np.array(SpacegroupAnalyzer(substrate).get_point_group_operations(cartesian=True)[3].as_dict()['matrix']))
np.array(SpacegroupAnalyzer(substrate).get_point_group_operations(cartesian=True)[3].as_dict()['matrix'])[0:4,0:3]
# print each operator as the first three columns of its 4x4 affine matrix:
# the 3x3 rotation block followed by the homogeneous bottom row
for i in range(len(SpacegroupAnalyzer(substrate).get_space_group_operations())):
    for symmops in np.array(SpacegroupAnalyzer(substrate).get_point_group_operations(cartesian=True)[i].as_dict()['matrix'])[0:4,0:3]:
        print('{}\n'.format(' '.join(str(np.around(n,8)) for n in symmops)))
###Output
1.0 0.0 0.0
0.0 1.0 0.0
0.0 -0.0 1.0
0.0 0.0 0.0
-1.0 0.0 0.0
0.0 -1.0 0.0
0.0 0.0 -1.0
0.0 0.0 0.0
[... output truncated: the remaining repetitive 4x3 operator blocks (rotation rows plus a 0.0 0.0 0.0 bottom row) are omitted ...]
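###Markdown
A quick consistency check on the operators printed above: every 3x3 rotation block should be orthogonal. A minimal sketch, assuming pymatgen's SymmOp objects expose `rotation_matrix` as documented:
###Code
import numpy as np

ops = SpacegroupAnalyzer(substrate).get_point_group_operations(cartesian=True)
for op in ops:
    R = np.array(op.rotation_matrix)
    # R @ R.T must be the identity for any proper or improper rotation
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-8)
print('checked', len(ops), 'rotation blocks: all orthogonal')
###Output
_____no_output_____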
1_Data_collection.ipynb | ###Markdown
DATA COLLECTION chEMBL is an open large-scale bioactivity database. chembl_webresource_client is the library developed and supported by chEMBL group. The library helps accessing chEMBL data.
###Code
import pandas as pd #for data processing
from chembl_webresource_client.new_client import new_client
###Output
_____no_output_____
###Markdown
Let's download the biological activity data from the ChEMBL database. The dataset comprises compounds that have been biologically tested for their activity towards the target.
###Code
target = new_client.target
target_query = target.search('FLT3') # search for the target; 'FLT3' is the gene symbol for the receptor tyrosine kinase FLT3
targets = pd.DataFrame.from_dict(target_query)
targets
#selecting our target
selected_target = targets.target_chembl_id[1]
selected_target
###Output
_____no_output_____
###Markdown
Let's retrieve the biological activity data for the tyrosine protein kinase receptor that is reported as IC50 values in nM.
###Code
activity = new_client.activity
data = activity.filter(target_chembl_id=selected_target).filter(standard_type="IC50")
#convert data to dataframe using pandas
df = pd.DataFrame.from_dict(data)
df.head()
#dimension of dataframe
df.shape
# remove rows whose standard_value is NaN
df2 = df[df.standard_value.notna()]
df2.head()
df2.shape
###Output
_____no_output_____
###Markdown
81 entries with NaN values were removed. Let's combine the features/columns that are important for model training.
###Code
columns= ['molecule_chembl_id','canonical_smiles','standard_value']
df3 = df2[columns]
df3.head()
###Output
_____no_output_____
###Markdown
Labeling the compounds as active/inactive/intermediate

Compounds are labeled based on their potency (standard_value): compounds with values <= 1000 nM are considered active, those with values >= 10000 nM are considered inactive, and everything in between is intermediate.
###Code
bioactivity_threshold = []
for i in df3.standard_value:
if float(i) >= 10000:
bioactivity_threshold.append("inactive")
elif float(i) <= 1000:
bioactivity_threshold.append("active")
else:
bioactivity_threshold.append("intermediate")
###Output
_____no_output_____
###Markdown
Let's add the bioactivity labels to our dataset
###Code
bioactivity_class = pd.Series(bioactivity_threshold, name='bioactivity')
df5 = pd.concat([df3, bioactivity_class], axis=1)
df5.head()
df5.bioactivity.value_counts()
###Output
_____no_output_____
###Markdown
There are 1645 active and 1283 inactive compounds/molecules in our dataset.

Let's save this dataset.
###Code
df5.to_csv('TPKR_data2.csv',index=False)
###Output
_____no_output_____ |
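###Markdown
A quick round-trip check that the file was written as expected; a minimal sketch reusing the pandas API already imported above:
###Code
# reload the saved dataset and confirm shape and label counts survive the round trip
df_check = pd.read_csv('TPKR_data2.csv')
print(df_check.shape)
print(df_check.bioactivity.value_counts())
###Output
_____no_output_____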
docs/source/index.ipynb | ###Markdown
.. _quick-start:
###Markdown
# What is Featuretools?
<img src="_static/images/featuretools_nav2.svg" width="500" align="center" alt="Featuretools">
**Featuretools** is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning.
## 5 Minute Quick Start
Below is an example of using Deep Feature Synthesis (DFS) to perform automated feature engineering. In this example, we apply DFS to a multi-table dataset consisting of timestamped customer transactions.
###Code
import featuretools as ft
###Output
_____no_output_____
###Markdown
#### Load Mock Data
###Code
data = ft.demo.load_mock_customer()
###Output
_____no_output_____
###Markdown
#### Prepare data
In this toy dataset, there are 3 DataFrames.
- **customers**: unique customers who had sessions
- **sessions**: unique sessions and associated attributes
- **transactions**: list of events in this session
###Code
customers_df = data["customers"]
customers_df
sessions_df = data["sessions"]
sessions_df.sample(5)
transactions_df = data["transactions"]
transactions_df.sample(5)
###Output
_____no_output_____
###Markdown
First, we specify a dictionary with all the DataFrames in our dataset. The DataFrames are passed in with their index column and time index column if one exists for the DataFrame.
###Code
dataframes = {
    "customers" : (customers_df, "customer_id"),
    "sessions" : (sessions_df, "session_id", "session_start"),
    "transactions" : (transactions_df, "transaction_id", "transaction_time")
}
###Output
_____no_output_____
###Markdown
Second, we specify how the DataFrames are related. When two DataFrames have a one-to-many relationship, we call the "one" DataFrame, the "parent DataFrame". A relationship between a parent and child is defined like this:
(parent_dataframe, parent_column, child_dataframe, child_column)
In this dataset we have two relationships
###Code
relationships = [("sessions", "session_id", "transactions", "session_id"),
                 ("customers", "customer_id", "sessions", "customer_id")]
###Output
_____no_output_____
###Markdown
.. note::
    To manage setting up DataFrames and relationships, we recommend using the :class:`EntitySet <featuretools.EntitySet>` class which offers convenient APIs for managing data like this. See :doc:`getting_started/using_entitysets` for more information.
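###Markdown
Below is an illustrative sketch of the EntitySet approach the note recommends; the `add_dataframe`/`add_relationship` calls follow the featuretools 1.x API, so treat the exact argument names as assumptions to check against your installed version:
```python
import featuretools as ft

es = ft.EntitySet(id="customer_data")
es = es.add_dataframe(dataframe_name="customers", dataframe=customers_df,
                      index="customer_id")
es = es.add_dataframe(dataframe_name="sessions", dataframe=sessions_df,
                      index="session_id", time_index="session_start")
es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions_df,
                      index="transaction_id", time_index="transaction_time")
es = es.add_relationship("sessions", "session_id", "transactions", "session_id")
es = es.add_relationship("customers", "customer_id", "sessions", "customer_id")
# the EntitySet can then be passed to ft.dfs via its entityset argument
```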
###Markdown
Run Deep Feature Synthesis

A minimal input to DFS is a dictionary of DataFrames, a list of relationships, and the name of the target DataFrame whose features we want to calculate. The output of DFS is a feature matrix and the corresponding list of feature definitions. Let's first create a feature matrix for each customer in the data.
###Code
feature_matrix_customers, features_defs = ft.dfs(dataframes=dataframes,
relationships=relationships,
target_dataframe_name="customers")
feature_matrix_customers
###Output
_____no_output_____
###Markdown
We now have dozens of new features to describe a customer's behavior.

Change target DataFrame

One of the reasons DFS is so powerful is that it can create a feature matrix for *any* DataFrame in our EntitySet. For example, if we wanted to build features for sessions.
###Code
dataframes = {
"customers" : (customers_df.copy(), "customer_id"),
"sessions" : (sessions_df.copy(), "session_id", "session_start"),
"transactions" : (transactions_df.copy(), "transaction_id", "transaction_time")
}
feature_matrix_sessions, features_defs = ft.dfs(dataframes=dataframes,
relationships=relationships,
target_dataframe_name="sessions")
feature_matrix_sessions.head(5)
###Output
_____no_output_____
###Markdown
Understanding Feature Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In general, Featuretools references generated features through the feature name. In order to make features easier to understand, Featuretools offers two additional tools, :func:`featuretools.graph_feature` and :func:`featuretools.describe_feature`, to help explain what a feature is and the steps Featuretools took to generate it. Let's look at this example feature:
###Code
feature = features_defs[18]
feature
###Output
_____no_output_____
###Markdown
Feature lineage graphs
""""""""""""""""""""""
Feature lineage graphs visually walk through feature generation. Starting from the base data, they show step by step the primitives applied and intermediate features generated to create the final feature.
###Code
ft.graph_feature(feature)
###Output
_____no_output_____
###Markdown
.. graphviz:: getting_started/graphs/demo_feat.dot

Feature descriptions
""""""""""""""""""""
Featuretools can also automatically generate English sentence descriptions of features. Feature descriptions help to explain what a feature is, and can be further improved by including manually defined custom definitions. See :doc:`/guides/feature_descriptions` for more details on how to customize automatically generated feature descriptions.
###Code
ft.describe_feature(feature)
###Output
_____no_output_____
###Markdown
What's next?

* Learn about [Representing Data with EntitySets](getting_started/using_entitysets.ipynb)
* Apply automated feature engineering with [Deep Feature Synthesis](getting_started/afe.ipynb)
* Explore [runnable demos](https://www.featuretools.com/demos) based on real world use cases
* Can't find what you're looking for? Ask for [help](resources/help.rst)
###Code
Table of contents
-----------------
.. toctree::
:maxdepth: 1
install
.. toctree::
:maxdepth: 2
getting_started/getting_started_index
guides/guides_index
.. toctree::
:maxdepth: 1
:caption: Resources and References
resources/resources_index
api_reference
Primitives <https://primitives.featurelabs.com/>
release_notes
Other links
------------
* :ref:`genindex`
* :ref:`search`
###Output
_____no_output_____
###Markdown
[Installation](installing.rst) [Tutorials](tutorials.rst) [Examples](examples.rst) [Server](server.md) [Gallery](gallery.rst) [API](api.rst) [Datasets](datasets.rst) [FAQ](faq.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?

Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted).

Why vaex

* **Performance:** works with huge tabular data, processes $\gt 10^9$ rows/second
* **Lazy / Virtual columns:** compute on the fly, without wasting ram
* **Memory efficient:** no memory copies when doing filtering/selections/subsets.
* **Visualization:** directly supported, a one-liner is often enough.
* **User friendly API:** you will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas.
* **Lean:** separated into multiple packages
    * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns.
    * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame.
    * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing.
    * `vaex-viz`: Visualization based on matplotlib.
    * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet.
    * `vaex-astro`: Astronomy related transformations and FITS file support.
    * `vaex-server`: Provides a server to access a DataFrame remotely.
    * `vaex-distributed`: (Deprecated) Now part of vaex-enterprise.
    * `vaex-qt`: Program written using Qt GUI.
    * `vaex`: Meta package that installs all of the above.
    * `vaex-ml`: [Machine learning](ml.ipynb)
* **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab.

Installation

Using conda:
* `conda install -c conda-forge vaex`

Using pip:
* `pip install --upgrade vaex`

Or read the [detailed instructions](installing.ipynb)

Getting started

We assume that you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rst#vaex.from_csv).
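###Markdown
Opening your own data follows the same pattern; a minimal sketch (the file names below are placeholders):
```python
df_hdf5 = vaex.open('my_data.hdf5')    # memory-maps hdf5/arrow files ('my_data.hdf5' is a placeholder)
df_csv = vaex.from_csv('my_data.csv')  # reads a csv into memory ('my_data.csv' is a placeholder)
```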
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets[]](api.rst#vaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation, storing only a representation of the computation, and computations are done on the fly when needed. You can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's grouby), and the shape and limits.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rst#vaex.dataframe.DataFrame.plot1d), [plot](api.rst#vaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rst#visualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
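###Markdown
The same statistics also accept vaex *selections*, which let you compute on a filtered subset without copying data; a minimal sketch, assuming the standard `df.select` / `selection=True` API:
```python
df.select(df.x > 0)                   # define a lazy boolean selection
print(df.count(selection=True))       # statistics restricted to the selection
print(df.mean(df.r, selection=True))
```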
###Markdown
[Installation](installing.rst)[Tutorials](tutorials.rst)[Examples](examples.rst)[API](api.rst)[Datasets](datasets.rst)[FAQ](faq.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting ram * **Memory efficient** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User friendly API:** you will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas. * **Lean:** separated into multiple packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Proof of concept) combined multiple servers / cluster into a single DataFrame for distributed computations. * `vaex-qt`: Program written using Qt GUI. * `vaex`: Meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. InstallationUsing conda: * `conda install -c conda-forge vaex`Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting startedWe assume that you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets[]](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation, storing only a representation of the computation, and computations are done on the fly when needed. You can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's groupby), together with the shape and limits arguments.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
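###Markdown
For completeness, the one-dimensional counterpart is also a one-liner; a minimal sketch using `plot1d` on the same example DataFrame (assuming the classic matplotlib-based plotting API used by `df.plot` above):

```python
import vaex

df = vaex.example()
# quick 1d histogram of x, analogous to df.plot for 2d
df.plot1d(df.x, limits=[-10, 10], shape=64)
```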
###Markdown
IPython Notebook Validation for py.test - Documentation One of the powerful uses of the IPython notebook is for documentation purposes; here we use a notebook to demonstrate the behaviour and usage of the IPython Notebook Validation plugin for py.test. The IPython notebook format `.ipynb` stores outputs as well as inputs. Validating the notebook means rerunning the notebook and making sure that it generates the same output as has been stored.Therefore, the **user MUST make the following distinction**:1. Running a notebook manually will likely change the output stored in the associated .ipynb file. These outputs will be used as references for the tests (i.e. the outputs from the last time you ran the notebook)2. Validating with the py.test plugin - these tests run your notebook code separately without storing the information; the outputs generated will be compared against those in the .ipynb fileThe purpose of the testing module is to ensure that the notebook is behaving as expected and that changes to the underlying source code haven't affected the results of an IPython notebook. For example, for documentation purposes - such as this. Command line usage The py.test program doesn't usually collect notebooks for testing; by passing the `--nbval` flag at the command line, the IPython Notebook Validation plugin will collect and test notebook cells, comparing their outputs with those saved in the file.```$ py.test --nbval my_notebook.ipynb```There is also an option `--nbval-lax`, which collects notebooks and runs them, failing if there is an error. This mode does not check the output of cells unless they are marked with a special `NBVAL_CHECK_OUTPUT` comment.```$ py.test --nbval-lax my_notebook.ipynb``` REGEX Output sanitizing Since all output is captured by the IPython notebook, some pesky messages and prompts (with time-stamped messages, for example) may always fail tests, which might be expected. The plugin allows the user to specify a sanitizing file at the command prompt using the following flag:```$ py.test --nbval my_notebook.ipynb --sanitize-with my_sanitize_file```This sanitize file contains a number of REGEX replacements. It is recommended, when removing output for the tests, that you replace the removed output with some sort of marker; this helps with debugging. The following file is written to the folder of this notebook and can be used to sanitize its outputs:
###Code
%%writefile doc_sanitize.cfg
[regex1]
regex: \d{1,2}/\d{1,2}/\d{2,4}
replace: DATE-STAMP
[regex2]
regex: \d{2}:\d{2}:\d{2}
replace: TIME-STAMP
###Output
Writing doc_sanitize.cfg
###Markdown
The first replacement finds dates in the given format and replaces them with the label 'DATE-STAMP'; likewise for strings that look like times. These will prevent the tests from failing due to time differences. Validate this notebook This documentation is written as a Notebook. You can validate this notebook yourself, as shown below; the outputs that you see here are stored in the ipynb file. If your system produces different outputs, the testing process will fail. Just use the following commands:```$ cd /path/to/repo/docs/source$ py.test --nbval index.ipynb --sanitize-with doc_sanitize.cfg``` Examples of plugin behaviour The following examples demonstrate how the plugin behaves during testing. Test this notebook yourself to see the validation in action! These two imports produce no output as standard; if any **warnings** are printed out, the cell will fail. Under normal operating conditions they will pass.
###Code
import numpy as np
import time
###Output
_____no_output_____
###Markdown
If python doesn't consistently print 7, then something has gone terribly wrong. **Deterministic cells** are expected to pass every time
###Code
print(5+2)
###Output
7
###Markdown
**Random outputs** will always fail.
###Code
print([np.random.rand() for i in range(4)])
print([np.random.rand() for i in range(4)])
###Output
[0.36133679016382714, 0.5043774697891126, 0.23281910875007927, 0.2713065513128683]
[0.5512421277985322, 0.02592706358897756, 0.05036036771084684, 0.7515926759190724]
###Markdown
**Inconsistent number of lines** of output will cause an error to be thrown.
###Code
for i in range(np.random.randint(1, 8)):
print(1)
###Output
1
1
1
###Markdown
Because the **time and date** will change with each run, we would expect this cell to fail every time. Using the sanitize file `doc_sanitize.cfg` (created above), you can clean up these outputs.
###Code
print('The time is: ' + time.strftime('%H:%M:%S'))
print("Today's date is: " + time.strftime('%d/%m/%y'))
###Output
The time is: 15:28:30
Today's date is: 21/12/16
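###Markdown
For reference, the two patterns in `doc_sanitize.cfg` behave like ordinary `re` substitutions; a standalone sketch of their effect (an illustration only, not nbval's internal code):

```python
import re

line = "Today's date is: 21/12/16, the time is: 15:28:30"
line = re.sub(r'\d{1,2}/\d{1,2}/\d{2,4}', 'DATE-STAMP', line)  # [regex1]
line = re.sub(r'\d{2}:\d{2}:\d{2}', 'TIME-STAMP', line)        # [regex2]
print(line)  # Today's date is: DATE-STAMP, the time is: TIME-STAMP
```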
###Markdown
Avoid output comparison for specific cells In case we want to avoid the testing process in specific input cells, we can write the comment ** NBVAL_IGNORE_OUTPUT ** at the beginning of them:
###Code
# NBVAL_IGNORE_OUTPUT
print('This is not going to be tested')
print(np.random.randint(1, 20000))
###Output
This is not going to be tested
12544
###Markdown
There's also a counterpart, to ensure the output is tested even when using `--nbval-lax` :
###Code
# NBVAL_CHECK_OUTPUT
print("This will be tested")
print(6 * 7)
###Output
This will be tested
42
###Markdown
Note that unexecuted cells will always skip their output check:
###Code
print('This is not going to be tested when unrun')
print(np.random.randint(1, 20000))
###Output
_____no_output_____
###Markdown
Skipping specific cells If, for some reason, a cell should not be executed during testing, the comment ** NBVAL_SKIP** can be used:

```python
# NBVAL_SKIP
print("Entering infinite loop...")
while True: pass
```

Checking exceptions Sometimes, we might want to allow a notebook cell to raise an exception, and check that the traceback is as we expect. By annotating the cell with the comment ** NBVAL_RAISES_EXCEPTION ** you can indicate that the cell is expected to raise an exception. The full traceback is not compared, but rather just that the raised exception is the same as the stored exception.
###Code
# NBVAL_RAISES_EXCEPTION
print("This exception will be tested")
raise RuntimeError("Foo")
###Output
This exception will be tested
###Markdown
This composes with the per-cell checking comments, so if you would like to avoid exceptions creating a test failure, but do not want to check the traceback, use ` NBVAL_IGNORE_OUTPUT`
###Code
# NBVAL_RAISES_EXCEPTION
print("If the raised exception doesn't match the stored exception, we get a failure")
raise SyntaxError("Foo")
# NBVAL_IGNORE_OUTPUT
# NBVAL_RAISES_EXCEPTION
print("This exception will not be checked, but will not cause a failure.")
raise RuntimeError("Bar")
###Output
This exception will not be checked, but will not cause a failure.
###Markdown
Using tags instead of comments If you do not want to put nbval comment annotations in your notebook, or your source language is not compatible with such annotations, you can use cell tags instead. Cell tags are strings that are added to the cell metadata under the label "tags", and can be added and removed using the "Tags" toolbar in Notebook version 5. The tags that Nbval recognizes are the same as the comment names, except lowercase, and with dashes ('-') instead of underscores ('\_'). For instance, the comment "`NBVAL_IGNORE_OUTPUT`" becomes the tag "`nbval-ignore-output`". However, for "`NBVAL_RAISES_EXCEPTION`", either "`nbval-raises-exception`" or the plain "`raises-exception`" tag can be used, since as of Notebook 5.1, the latter is a special tag that tells the Notebook cell executor to continue running normally after an exception is raised. Figures
###Code
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Currently, only the matplotlib text output of the Figure is compared, but it is possible to modify the plugin to allow comparison of the full image string.
###Code
plt.imshow(np.array([[i + j for i in range(3)]
for j in range(3)]),
interpolation='None'
)
###Output
_____no_output_____
###Markdown
Skipping certain output types In case nbval is comparing some cell outputs you do not care about, like:```Traceback:missing key: TESTING dict_keys(['stderr']) != REFERENCE dict_keys(['application/javascript', 'stderr'])```There is a workaround. Add the following to your conftest.py:
###Code
def pytest_collectstart(collector):
    # only adjust notebook collectors, which carry a skip_compare attribute
    if collector.fspath and collector.fspath.ext == '.ipynb':
        collector.skip_compare += 'text/html', 'application/javascript', 'stderr',
###Output
_____no_output_____
###Markdown
Home AutoNormalize AutoNormalize is a Python library for automated datatable normalization. It allows you to build an EntitySet from a single denormalized table and generate features for machine learning using [Featuretools](https://github.com/alteryx/featuretools). Table of contents
###Code
.. toctree::
:maxdepth: 1
install
.. toctree::
:maxdepth: 2
guides/editing_dependencies
guides/kaggle_food_dataset
guides/kaggle_liquor_sales_dataset
.. toctree::
:maxdepth: 1
api_reference
release_notes
###Output
_____no_output_____
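###Markdown
To make the workflow concrete, a minimal sketch of normalizing a single flat table; note that the `auto_entityset` call, its parameters, and the import path here are hypothetical (written from memory of the AutoNormalize README, not taken from this document):

```python
import pandas as pd
from autonormalize import autonormalize as an  # hypothetical import path

# a single denormalized table: customer attributes repeat on every order row
df = pd.DataFrame({
    'order_id': [1, 2, 3],
    'customer_id': [10, 10, 20],
    'customer_name': ['Ann', 'Ann', 'Bo'],
})
# hypothetical call: detect functional dependencies and split the table
# into a normalized Featuretools EntitySet
es = an.auto_entityset(df, index='order_id', name='orders')
```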
###Markdown
sphinx-astrorefsAstro-style references in Sphinx documents
###Code
.. contents:: Table of Contents
:local:
###Output
_____no_output_____
###Markdown
About``sphinx-astrorefs`` is a [Sphinx](https://www.sphinx-doc.org/en/master/) extension for formatting citations and references in a style similar to that used in the astrophysics literature. It is built on top of [sphinxcontrib-bibtex](https://sphinxcontrib-bibtex.readthedocs.io), a Sphinx extension for including ``bibtex`` citations in Sphinx documents. By pre- and post-processing the input and output from Sphinx and ``sphinxcontrib-bibtex``, ``sphinx-astrorefs`` allows you to obtain citations in the astro-specific style in the HTML and LaTeX rendering of your Sphinx documents. InstallationInstallation is easiest using ``pip``: ```pip install sphinx-astrorefs```That's it! UsageTo start using this extension, add it to your ``extensions`` in your project's ``conf.py``, e.g.,```extensions= ['sphinxcontrib.bibtex','sphinx_astrorefs']```(note the underscore instead of a dash) where we include ``sphinxcontrib.bibtex``, because this extension assumes that you are using this extension for bibliography management.
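A minimal ``conf.py`` sketch pulling these pieces together (assuming ``sphinxcontrib.bibtex`` version >= 2 and a ``references.bib`` next to ``conf.py``; the encoding line is optional):

```python
# conf.py -- minimal sketch
extensions = [
    'sphinxcontrib.bibtex',
    'sphinx_astrorefs',
]
bibtex_bibfiles = ['references.bib']  # sphinxcontrib.bibtex >= 2
bibtex_encoding = 'latin'             # only if your .bib file is not UTF-8
```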
###Code
Then place the bibliography somewhere in your Sphinx document tree, using ``:style: astrostyle``, e.g.,
.. code-block:: rst
.. bibliography::
:cited:
:style: astrostyle
If you are using ``sphinxcontrib.bibtex`` version < 2, you would also specify the ``.bib`` file in the first line as, e.g., ``.. bibliography:: references.bib`` and you could indicate the bibtex file's encoding on another line, e.g., ``:encoding: latin``. In ``sphinxcontrib.bibtex`` version >= 2, you instead specify the ``.bib`` file using the ``bibtex_bibfiles`` configuration parameter in your ``conf.py`` (e.g., ``bibtex_bibfiles= ['references.bib']``) and the encoding is another configuration parameter (e.g., ``bibtex_encoding= 'latin'``).
Then you can use the same citation commands as you normally would in ``sphinxcontrib.bibtex``, e.g.,
.. code-block:: rst
Dark matter was first proposed by :cite:`Zwicky1933` to explain
the high velocity dispersion of galaxies in the Coma cluster
is rendered as
.. epigraph::
Dark matter was first proposed by :cite:`Zwicky1933` to explain the high velocity dispersion of galaxies in the Coma cluster.
A regular invocation of ``:cite:`` like this is rendered as ``AUTHOR (YEAR)`` (that is, as ``\citet`` in LaTeX's ``natbib`` package). However, if you enclose the ``:cite:`` command in parentheses, the citation is rendered as ``(AUTHOR YEAR)`` (that is, the equivalent of ``natbib``'s ``\citep``). For example,
.. code-block:: rst
Further evidence for the existence of dark matter was provided by the
flat rotation curve of the Andromeda galaxy (:cite:`RubinFord1970`).
is rendered as
.. epigraph::
Further evidence for the existence of dark matter was provided by the flat rotation curve of the Andromeda galaxy (:cite:`RubinFord1970`).
Placing a colon ``:`` in front of the ``:cite:`` command causes the citation to simply appear as ``AUTHOR YEAR`` (the equivalent of ``natbib``'s ``\citealt``), e.g.,
.. code-block:: rst
Simulations of structure formation in a Universe dominated by weakly-interacting,
cold dark matter revealed that these simulations' large-scale structure is
consistent with observations (e.g., ::cite:`DavisEfstathiouFrenkWhite1985`).
is rendered as
.. epigraph::
Simulations of structure formation in a Universe dominated by weakly-interacting, cold dark matter revealed that these simulations' large-scale structure is consistent with observations (e.g., ::cite:`DavisEfstathiouFrenkWhite1985`).
If you are writing in a Jupyter notebook included into your Sphinx document using, e.g., `nbsphinx <https://github.com/spatialaudio/nbsphinx>`__, the same rules apply, but the citation command in Markdown cells is ``<cite data-cite="LABEL">Somebody et al. (some year)</cite>``, for example
.. code-block:: html
Further simulations of the formation of individual galaxies in the cold-dark-matter
paradigm showed that the density profile of galaxies' dark-matter distributions
("halos") follows a universal
form (<cite data-cite="NavarroFrenkWhite1997">Navarro et al. 1997</cite>).
is rendered as
###Output
_____no_output_____
###Markdown
> Further simulations of the formation of individual galaxies in the cold-dark-matter paradigm showed that the density profile of galaxies' dark-matter distributions ("halos") follows a universal form (Navarro et al. (1997)).
###Code
Note here that what you put between the ``<cite>`` and ``</cite>`` tags does *not* get transferred to the output, the only part of this entire command that is used is the ``data-cite`` key, which is the bibtex key of the reference. Note further that you can use *any* HTML tag, as long as it has the ``data-cite`` attribute. Enclosing the ``<cite>...</cite>`` command in parentheses again produces an ``(AUTHOR YEAR)`` citation, while preceding the ``<cite>`` tag with a colon ``:`` produces ``AUTHOR YEAR``.
###Output
_____no_output_____
###Markdown
``sphinx-astrorefs`` follows the Astrophysical Journal's rules for the number of authors to be displayed in the reference list. If you have been looking at the [Reference section](References) below, you will have noticed that all authors of the papers referenced so far are included, as up to five authors are shown in the author list. For larger collaborations, the first five authors are shown, e.g.,
###Code
.. code-block:: rst
An important clue as to the identity of dark matter was provided by
the failure of microlensing searches to find enough microlensing events
by compact halo objects towards the Large Magellanic Cloud to account
for all of the Milky Way's dark matter (e.g., ::cite:`AlcockEtAl2000`).
Thus, dark matter is not comprised of faint or dead stars.
is rendered as
.. epigraph::
An important clue as to the identity of dark matter was provided by
the failure of microlensing searches to find enough microlensing events
by compact halo objects towards the Large Magellanic Cloud to account
for all of the Milky Way's dark matter (e.g., ::cite:`AlcockEtAl2000`).
Thus, dark matter is not comprised of faint or dead stars.
and clicking on the reference, you see that the author list is cut at five authors followed by *et al.*. ``sphinx-astrorefs`` will also correctly render collaborations as part of the author list, e.g.,
.. code-block:: rst
Currently, the best measurements of the cosmic abundance of dark matter
are provided by observations of the anisotropies in the cosmic
microwave background, which show that dark matter is about five times
more abundant than ordinary baryonic matter (:cite:`Planck2016`).
is rendered as
.. epigraph::
Currently, the best measurements of the cosmic abundance of dark matter
are provided by observations of the anisotropies in the cosmic
microwave background, which show that dark matter is about five times
more abundant than ordinary baryonic matter (:cite:`Planck2016`).
Again, click on the reference to see how it is rendered in the bibliography.
``sphinx-astrorefs`` will also correctly add a suffix 'a', 'b', etc. to the year if the labels of two bibliographical entries would otherwise be identical. For example,
.. code-block:: rst
In the last fifteen years, increasingly-large simulations of the formation
of individual galaxy halos have revealed the detailed small-scale properties
of dark-matter halos (e.g., ::cite:`SpringelEtAl2008a`), which in the
standard cold-dark-matter paradigm should have a large amount of substructure
down to Earth-mass scales. If dark-matter were to annihilate to photons, this
dense substructure would show up as extended gamma-ray sources in
the Milky Way's halo (:cite:`SpringelEtAl2008b`).
is rendered as
.. epigraph::
In the last fifteen years, increasingly-large simulations of the formation
of individual galaxy halos have revealed the detailed small-scale properties
of dark-matter halos (e.g., ::cite:`SpringelEtAl2008a`), which in the
standard cold-dark-matter paradigm should have a large amount of substructure
down to Earth-mass scales. If dark-matter were to annihilate to photons, this
dense substructure would show up as extended gamma-ray sources in
the Milky Way's halo (:cite:`SpringelEtAl2008b`).
###Output
_____no_output_____
###Markdown
The bibliographyCurrently, the bibliography style implemented deviates from that used in astronomical journals in that it includes the title. Please open an [issue](https://github.com/jobovy/sphinx-astrorefs/issues) if you would like the option of excluding the title.If they are included in the ``bibtex`` entry, ``sphinx-astrorefs`` will use the ``doi``, ``adsurl``, or ``eprint`` fields to create links to:* the DOI (typically the journal version) using the ``doi`` field (linked from the journal in the bibliography), * the [SAO/NASA Astrophysics Data System](https://ui.adsabs.harvard.edu/about/) (ADS) entry using the ``adsurl`` field (linked from the volume), and * the [arXiv.org](https://arxiv.org/) entry using the ``eprint`` field (linked from the pages). It is easiest to create your ``bibtex`` file by directly copying the ADS' bibtex citation. However, those entries contain macros defined by the AAS journals for different journals (e.g., ``\apj`` for the Astrophysical Journal), which are typically resolved by the LaTeX document classes of journals or by including a LaTeX style file. To resolve these macros in ``sphinx-astrorefs``, use the configuration value ``astrorefs_resolve_aas_macros= True`` in your Sphinx ``conf.py`` file. When you do this, you need to also provide an input and an output ``.bib`` filename for the ``bibtex`` file, where the input file is the one you create from ADS and the output one is the one used by Sphinx (and so is the one you would reference in the ``bibtex_bibfiles`` configuration parameter [``sphinxcontrib.bibtex`` version >= 2] or in the ``.. bibliography:: references.bib`` directive [``sphinxcontrib.bibtex`` version < 2]). For example,```astrorefs_resolve_aas_macros= Trueastrorefs_resolve_aas_macros_infile= 'refs.bib'astrorefs_resolve_aas_macros_outfile= 'references.bib'``` can be used when the ``bibtex`` file you create is ``refs.bib`` and you use the ``bibtex_files= ['references.bib']`` configuration parameter (``.. bibliography:: references.bib`` in ``sphinxcontrib.bibtex`` version < 2). Note that there is no support for processing multiple bibtex files like this at this point. The ``pybtex`` ``astrostyle``Under the hood, ``sphinx-astrorefs`` uses [pybtex](https://pybtex.org/) to define the ``:style: astrostyle`` style used in the bibliography directive. This style consists of the ``AUTHOR YEAR`` labels, the rendering of the references in the bibliography, and the sorting of the references.If you want to use this ``pybtex`` style in a different setting (that is, outside of Sphinx), you can simply do```from sphinx_astrorefs import pybtex_astropybtex_astro.register()``` and then you can use the ``astrostyle`` as a ``pybtex`` style. References
###Code
.. bibliography::
:cited:
:style: astrostyle
###Output
_____no_output_____
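###Markdown
Once registered, the style can be looked up like any other pybtex plugin; a minimal sketch (assuming pybtex's standard plugin-lookup API):

```python
from pybtex.plugin import find_plugin
from sphinx_astrorefs import pybtex_astro

pybtex_astro.register()
# look up the registered formatting style by name
style_cls = find_plugin('pybtex.style.formatting', 'astrostyle')
style = style_cls()  # use as a regular pybtex formatting style
```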
###Markdown
Spreadsheet widget for the Jupyter Notebook InstallationWith conda:```$ conda install -c conda-forge ipysheet```With pip:```$ pip install ipysheet```To make it work for Jupyter lab:```$ jupyter labextension install ipysheet```If you have notebook 5.2 or below, you also need to execute:```$ jupyter nbextension enable --py --sys-prefix ipysheet$ jupyter nbextension enable --py --sys-prefix ipysheet.renderer_nbext``` Getting startedAlthough ipysheet contains an object oriented interface, we recommend using the "state machine" based interface, similar to matplotlib's pyplot/pylab interface. Comparable to the matplotlib pylab interface, this interface keeps track of the current sheet. Using the [cell](api.rstipysheet.easy.cell) function, [Cell](api.rstipysheet.sheet.Cell) widgets are added to the current sheet.Importing ipysheet and invoking the [sheet](api.rstipysheet.easy.sheet) function will create the default spreadsheet widget. The function returns a [Sheet](api.rstipysheet.sheet.Sheet) instance, leaving that expression as the last statement of a code cell will display it, otherwise use `display(sheet)`.Note that this documentation is a Jupyter notebook, and you can try it out directly on Binder:[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/QuantStack/ipysheet/master?filepath=docs%2Fsource%2Findex.ipynb)
###Code
import ipysheet
sheet = ipysheet.sheet()
sheet
###Output
_____no_output_____
###Markdown
Using the [cell](api.rstipysheet.easy.cell) function, we can create [Cell](api.rstipysheet.sheet.Cell) widgets that are directly added to the current sheet.
###Code
sheet = ipysheet.sheet(rows=3, columns=4)
cell1 = ipysheet.cell(0, 0, 'Hello')
cell2 = ipysheet.cell(2, 0, 'World')
cell_value = ipysheet.cell(2,2, 42.)
sheet
###Output
_____no_output_____
###Markdown
EventsUsing link or observe we can link widgets together, or attach event handlers **Note:** The examples below contain event handler written in Python that needs a running kernel, they will not work in the pure html documentation. They do work in binder!
###Code
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
# changes in a or b should trigger this function
def calculate(change):
cell_sum.value = cell_a.value + cell_b.value
cell_a.observe(calculate, 'value')
cell_b.observe(calculate, 'value')
widgets.VBox([sheet, slider])
###Output
_____no_output_____
###Markdown
Cell rangesInstead of referring to a single cell, we can also refer to cell ranges, rows and columns.
###Code
sheet = ipysheet.sheet(rows=5, columns=4)
row = ipysheet.row(0, [0, 1, 2, 3], background_color="red")
column = ipysheet.column(1, ["a", "b", "c", "d"], row_start=1, background_color="green")
cells = ipysheet.cell_range([["hi", "ola"], ["ciao", "bonjour"], ["hallo", "guten tag"]],
row_start=1, column_start=2, background_color="yellow")
sheet
###Output
_____no_output_____
###Markdown
CalculationsSince this is such a common pattern, a helper decorator [calculation](api.rstipysheet.easy.calculation) is provided, shortening the above code considerably.
###Code
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
@ipysheet.calculation(inputs=[cell_a, cell_b], output=cell_sum)
def calculate(a, b):
return a + b
widgets.VBox([sheet, slider])
###Output
_____no_output_____
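###Markdown
Whole tables can also move between pandas and ipysheet; a short sketch (hedged: this assumes the `from_dataframe`/`to_dataframe` helpers available in recent ipysheet releases):

```python
import pandas as pd
import ipysheet

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
sheet = ipysheet.from_dataframe(df)        # build a sheet from a DataFrame
round_trip = ipysheet.to_dataframe(sheet)  # read (possibly edited) values back
```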
###Markdown
Renderersipysheet is built on Handsontable, which allows [custom renderers](https://docs.handsontable.com/demo-custom-renderers.html), which we also support. Note that this means ipysheet allows arbitrary JavaScript injection (TODO: make this part optional)
###Code
jscode_renderer_negative = """
function (instance, td, row, col, prop, value, cellProperties) {
Handsontable.renderers.TextRenderer.apply(this, arguments);
if (value < 0)
td.style.backgroundColor = 'red'
else
td.style.backgroundColor = 'green'
}
"""
ipysheet.renderer(code=jscode_renderer_negative, name='negative');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative')
s
###Output
_____no_output_____
###Markdown
If [flexx](http://flexx.readthedocs.io/en/stable/pyscript/index.html) is installed, Python code can be transpiled to JavaScript at runtime.
###Code
def renderer_negative(instance, td, row, col, prop, value, cellProperties):
Handsontable.renderers.TextRenderer.apply(this, arguments);
if value < 0:
td.style.backgroundColor = 'orange'
else:
td.style.backgroundColor = ''
ipysheet.renderer(code=renderer_negative, name='negative_transpiled');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative_transpiled')
s
###Output
_____no_output_____
###Markdown
[Installation](install.md)[Examples](examples.rst)[API](api.rst)[VR](vr.md)[pythreejs](pythreejs.ipynb) IpyvolumeIPyvolume is a Python library to visualize 3d volumes and glyphs (e.g. 3d scatter plots), in the Jupyter notebook, with minimal configuration and effort. It is currently pre-1.0, so use at your own risk. IPyvolume's *volshow* is to 3d arrays what matplotlib's imshow is to 2d arrays. Other (more mature but possibly more difficult to use) related packages are [yt](http://yt-project.org/), [VTK](https://www.vtk.org) and/or [Mayavi](http://docs.enthought.com/mayavi/mayavi/). Feedback and contributions are welcome: [Github](https://github.com/maartenbreddels/ipyvolume), [Email](mailto:[email protected]) or [Twitter](https://twitter.com/maartenbreddels). — Quick intro Volume For quick results, use `ipyvolume.widgets.quickvolshow`. From a numpy array, we create two boxes, using slicing, and visualize it.
###Code
import numpy as np
import ipyvolume as ipv
V = np.zeros((128,128,128)) # our 3d array
# outer box
V[30:-30,30:-30,30:-30] = 0.75
V[35:-35,35:-35,35:-35] = 0.0
# inner box
V[50:-50,50:-50,50:-50] = 0.25
V[55:-55,55:-55,55:-55] = 0.0
ipv.quickvolshow(V, level=[0.25, 0.75], opacity=0.03, level_width=0.1, data_min=0, data_max=1)
###Output
_____no_output_____
###Markdown
Scatter plotSimple scatter plots are also supported.
###Code
import ipyvolume as ipv
import numpy as np
x, y, z = np.random.random((3, 10000))
ipv.quickscatter(x, y, z, size=1, marker="sphere")
###Output
_____no_output_____
###Markdown
Quiver plotQuiver plots are also supported, showing a vector at each point.
###Code
import ipyvolume as ipv
import numpy as np
x, y, z, u, v, w = np.random.random((6, 1000))*2-1
ipv.quickquiver(x, y, z, u, v, w, size=5)
###Output
_____no_output_____
###Markdown
Mesh plotAnd surface/mesh plots, showing surfaces or wireframes.
###Code
import ipyvolume as ipv
x, y, z, u, v = ipv.examples.klein_bottle(draw=False)
ipv.figure()
m = ipv.plot_mesh(x, y, z, wireframe=False)
ipv.squarelim()
ipv.show()
###Output
_____no_output_____
###Markdown
Built on IpywidgetsFor anything more sophisticated, use `ipyvolume.pylab`, ipyvolume's copy of matplotlib's 3d plotting (+ volume rendering). Since ipyvolume is built on [ipywidgets](http://ipywidgets.readthedocs.io/), we can link widget's properties.
###Code
import ipyvolume as ipv
import numpy as np
x, y, z, u, v, w = np.random.random((6, 1000))*2-1
selected = np.random.randint(0, 1000, 100)
ipv.figure()
quiver = ipv.quiver(x, y, z, u, v, w, size=5, size_selected=8, selected=selected)
from ipywidgets import FloatSlider, ColorPicker, VBox, jslink
size = FloatSlider(min=0, max=30, step=0.1)
size_selected = FloatSlider(min=0, max=30, step=0.1)
color = ColorPicker()
color_selected = ColorPicker()
jslink((quiver, 'size'), (size, 'value'))
jslink((quiver, 'size_selected'), (size_selected, 'value'))
jslink((quiver, 'color'), (color, 'value'))
jslink((quiver, 'color_selected'), (color_selected, 'value'))
VBox([ipv.gcc(), size, size_selected, color, color_selected])
###Output
_____no_output_____
###Markdown
forallpeople documentation> *"For all time. For all people."*> - Nicolas de Caritat (Marquis de Condorcet), in regards to the metric system (now the SI system) `forallpeople` is a Python library for representing the SI base units to enable easy-to-use, units-aware calculations. In addition to the SI base units, `forallpeople` can be used to represent units which are defined by the SI unit system, such as US Customary units. Installation```pip install forallpeople``` A Simple Example
###Code
import forallpeople as si
si.environment('default')
g = 9.81 * si.m/si.s**2
m = 3500 * si.kg
force = m * g
force
###Output
_____no_output_____
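###Markdown
As a quick follow-up (a minimal sketch using only the arithmetic shown above; the area value is an assumption for illustration), dividing the force by a cross-sectional area yields a pressure, which forallpeople reduces to the appropriate derived SI unit:
###Code
area = 0.01 * si.m**2   # assumed cross-section of 0.01 m^2
stress = force / area   # N / m^2, i.e. pascals (auto-scaled by the environment)
stress
###Output
_____no_output_____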
###Markdown
Spreadsheet widget for the Jupyter Notebook InstallationWith conda:```$ conda install -c conda-forge ipysheet```With pip:```$ pip install ipysheet```To make it work for Jupyter lab:```$ jupyter labextension install ipysheet```If you have notebook 5.2 or below, you also need to execute:```$ jupyter nbextension enable --py --sys-prefix ipysheet``` Getting startedAlthough ipysheet contains an object oriented interface, we recommend using the "state machine" based interface, similar to matplotlib's pyplot/pylab interface. Comparable to the matplotlib pylab interface, this interface keeps track of the current sheet. Using the [cell](api.rstipysheet.easy.cell) function, [Cell](api.rstipysheet.sheet.Cell) widgets are added to the current sheet. Importing ipysheet and invoking the [sheet](api.rstipysheet.easy.sheet) function will create the default spreadsheet widget. The function returns a [Sheet](api.rstipysheet.sheet.Sheet) instance; leaving that expression as the last statement of a code cell will display it, otherwise use `display(sheet)`. Note that this documentation is a Jupyter notebook, and you can try it out directly on Binder:[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/QuantStack/ipysheet/master?filepath=docs%2Fsource%2Findex.ipynb)
###Code
import ipysheet
sheet = ipysheet.sheet()
sheet
###Output
_____no_output_____
###Markdown
Using the [cell](api.rstipysheet.easy.cell) function, we can create [Cell](api.rstipysheet.sheet.Cell) widgets that are directly added to the current sheet.
###Code
sheet = ipysheet.sheet(rows=3, columns=4)
cell1 = ipysheet.cell(0, 0, 'Hello')
cell2 = ipysheet.cell(2, 0, 'World')
cell_value = ipysheet.cell(2,2, 42.)
sheet
###Output
_____no_output_____
###Markdown
EventsUsing link or observe we can link widgets together, or attach event handlers. **Note:** The examples below contain event handlers written in Python that need a running kernel; they will not work in the pure HTML documentation. They do work in Binder!
###Code
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
# changes in a or b should trigger this function
def calculate(change):
cell_sum.value = cell_a.value + cell_b.value
cell_a.observe(calculate, 'value')
cell_b.observe(calculate, 'value')
widgets.VBox([sheet, slider])
###Output
_____no_output_____
###Markdown
Cell rangesInstead of referring to a single cell, we can also refer to cell ranges, rows and columns.
###Code
sheet = ipysheet.sheet(rows=5, columns=4)
row = ipysheet.row(0, [0, 1, 2, 3], background_color="red")
column = ipysheet.column(1, ["a", "b", "c", "d"], row_start=1, background_color="green")
cells = ipysheet.cell_range([["hi", "ola"], ["ciao", "bonjour"], ["hallo", "guten tag"]],
row_start=1, column_start=2, background_color="yellow")
sheet
###Output
_____no_output_____
###Markdown
CalculationsSince this is such a common pattern, a helper decorator [calculation](api.rstipysheet.easy.calculation) is provided, shortening the above code considerably.
###Code
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
@ipysheet.calculation(inputs=[cell_a, cell_b], output=cell_sum)
def calculate(a, b):
return a + b
widgets.VBox([sheet, slider])
###Output
_____no_output_____
###Markdown
Renderersipysheet is built on Handsontable, which allows [custom renderers](https://docs.handsontable.com/demo-custom-renderers.html), which we also support.
###Code
jscode_renderer_negative = """function (value) {
return {
backgroundColor: value < 0 ? 'red' : 'green'
};
}
"""
ipysheet.renderer(code=jscode_renderer_negative, name='negative');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative')
s
###Output
_____no_output_____
###Markdown
If [flexx](http://flexx.readthedocs.io/en/stable/pyscript/index.html) is installed, Python code can be transpiled to JavaScript at runtime.
###Code
def renderer_negative(value):
return {
'backgroundColor': 'orange' if value < 0 else ''
}
ipysheet.renderer(code=renderer_negative, name='negative_transpiled');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative_transpiled')
s
###Output
_____no_output_____
###Markdown
Home ![Woodwork](images/woodwork.svg) Woodwork is a library that helps with data typing of 2-dimensional tabular data structures.It provides a special namespace on your DataFrame, `ww`, which contains the physical, logical, and semantic data types.It can be used with [Featuretools](https://www.featuretools.com), [EvalML](https://evalml.featurelabs.com/en/latest/), and general machine learning applications where logical and semantic typing information is important.Woodwork provides simple interfaces for adding and updating logical and semantic typing information, as well as selecting data columns based on the types. Quick StartBelow is an example of using Woodwork to automatically infer the Logical Types for a DataFrame and select columns with specific types.
###Code
import woodwork as ww
df = ww.demo.load_retail(nrows=100, init_woodwork=False)
df.ww.init(name="retail")
df.ww
filtered_df = df.ww.select(include=['numeric', 'Boolean'])
filtered_df.head(5)
###Output
_____no_output_____
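###Markdown
Typing information can also be updated after initialization. As a small sketch (assuming the `customer_name` column is present in the demo data and that string logical-type names are accepted), `set_types` reassigns a column's logical type in place:
###Code
df.ww.set_types(logical_types={'customer_name': 'Categorical'})
df.ww.types
###Output
_____no_output_____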
###Markdown
Table of contents
###Code
.. toctree::
:maxdepth: 1
install
start
.. toctree::
:maxdepth: 2
guides/guides_index
.. toctree::
:maxdepth: 1
api_reference
release_notes
###Output
_____no_output_____
###Markdown
pyrecorder
###Code
|travis| |python| |license|
.. |travis| image:: https://travis-ci.com/julesy89/pyrecorder.svg?branch=master
:alt: build status
:target: https://travis-ci.com/julesy/pyrecorder
.. |python| image:: https://img.shields.io/badge/python-3.6-blue.svg
:alt: python 3.6
.. |license| image:: https://img.shields.io/badge/license-apache-orange.svg
:alt: license apache
:target: https://www.apache.org/licenses/LICENSE-2.0
###Output
_____no_output_____
###Markdown
![logo](_static/pyrecorder.png) **Github:** https://github.com/anyoptimization/pyrecorder Installation The framework is available at the PyPi Repository:
###Code
.. code-block:: bash
pip install -U pyrecorder
###Output
_____no_output_____
###Markdown
Matplotlib Please note that the examples below use the `vp80` codec to create a video which can be played in a browser and also in this documentation. Nevertheless, without specifying a codec, `mp4v` is used by default. Video
###Code
import numpy as np
import matplotlib.pyplot as plt
from pyrecorder.recorder import Recorder
from pyrecorder.writers.video import Video
from pyrecorder.converters.matplotlib import Matplotlib
# create a writer that takes the filename and, here, an explicit codec
writer = Video("example.webm", codec='vp80')
# use the with statement to close the recorder when done
with Recorder(writer) as rec:
# use black background for this plot
plt.style.use('dark_background')
# record 10 different snapshots
for t in range(50, 500, 5):
a = np.arange(t) * 0.1
plt.plot(a * np.sin(a), a * np.cos(a))
plt.xlim(-50, 50)
plt.ylim(-50, 50)
plt.axis('off')
# use the record to store the current plot
rec.record()
# revert to default settings for other plots
plt.style.use('default')
###Output
_____no_output_____
###Markdown
When the code has finished, the video has been written to the specified filename `example.webm`. Let us look at what has been recorded:
###Code
display("example.webm")
###Output
_____no_output_____
###Markdown
For this example the default settings have been used and the global drawing space of Matplotlib is recorded. Let us look at another example with a few modifications:
###Code
import numpy as np
import matplotlib.pyplot as plt
from pyrecorder.recorder import Recorder
from pyrecorder.writers.video import Video
from pyrecorder.converters.matplotlib import Matplotlib
# initialize the converter, which creates an image when `record()` is called
converter = Matplotlib(dpi=120)
writer = Video("example2.webm", codec='vp80')
rec = Recorder(writer, converter=converter)
for t in range(10):
# let us create a local figure object with two sub figures
fig, (ax1, ax2) = plt.subplots(2, figsize=(3, 4))
X = np.random.random((100, 2))
ax1.scatter(X[:, 0], X[:, 1], color="green")
X = np.random.random((100, 2))
ax2.scatter(X[:, 0], X[:, 1], color="red")
# fix the size of figure and legends
fig.tight_layout()
# take a snapshot of the specific figure object with the recorder
rec.record(fig=fig)
rec.close()
display("example2.webm")
###Output
_____no_output_____
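###Markdown
Both video examples above pass an explicit codec for browser playback. As noted earlier, leaving the codec out falls back to `mp4v` and writes a regular `.mp4` file; a minimal sketch (the filename is just an illustration):
###Code
# relying on the default codec (mp4v)
writer_default = Video("example_default.mp4")
###Output
_____no_output_____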
###Markdown
GIF
###Code
import matplotlib.pyplot as plt
import numpy as np
from pyrecorder.recorder import Recorder
from pyrecorder.writers.gif import GIF
with Recorder(GIF("example.gif", duration=0.2)) as rec:
for t in range(0, 200, 5):
x = np.linspace(0, 4, 100)
y = np.sin(2 * np.pi * (x - 0.01 * t))
plt.plot(x, y)
rec.record()
###Output
_____no_output_____
###Markdown
![My GIF](example.gif) Contact
###Code
Feel free to contact me if you have any questions:
| `Julian Blank <http://julianblank.com>`_ (blankjul [at] egr.msu.edu)
| Michigan State University
| Computational Optimization and Innovation Laboratory (COIN)
| East Lansing, MI 48824, USA
###Output
_____no_output_____
###Markdown
pysampling
###Code
|python| |license|
.. |python| image:: https://img.shields.io/badge/python-3.6-blue.svg
:alt: python 3.6
.. |license| image:: https://img.shields.io/badge/license-apache-orange.svg
:alt: license apache
:target: https://www.apache.org/licenses/LICENSE-2.0
https://github.com/anyoptimization/pysampling
###Output
_____no_output_____
###Markdown
ezmodel - A common interface for models and model selection
###Code
|python| |license|
.. |python| image:: https://img.shields.io/badge/python-3.6-blue.svg
:alt: python 3.6
.. |license| image:: https://img.shields.io/badge/license-apache-orange.svg
:alt: license apache
:target: https://www.apache.org/licenses/LICENSE-2.0
###Output
_____no_output_____
###Markdown
Installation The framework is available at the PyPi Repository:
###Code
.. code:: bash
pip install -U ezmodel
###Output
_____no_output_____
###Markdown
Surrogate Models Benchmark
###Code
.. include:: ../../ezmodel/usage/usage_benchmark.py
:literal:
::
mean std min max median
label
Kriging[regr=constant,corr=gauss,thetaU=100,ARD=False] 0.017159 0.007472 0.009658 0.025359 0.014855
Kriging[regr=constant,corr=gauss,thetaU=20,ARD=False] 0.017159 0.007472 0.009658 0.025359 0.014855
Kriging[regr=linear,corr=gauss,thetaU=100,ARD=False] 0.018064 0.008069 0.010350 0.027456 0.014246
Kriging[regr=linear,corr=gauss,thetaU=20,ARD=False] 0.018064 0.008069 0.010350 0.027456 0.014246
Kriging[regr=constant,corr=gauss,thetaU=100,ARD=True] 0.021755 0.007409 0.011955 0.028896 0.025163
Kriging[regr=constant,corr=gauss,thetaU=20,ARD=True] 0.021755 0.007409 0.011955 0.028896 0.025163
Kriging[regr=linear,corr=gauss,thetaU=20,ARD=True] 0.025018 0.011348 0.011576 0.040585 0.022124
Kriging[regr=linear,corr=gauss,thetaU=100,ARD=True] 0.025018 0.011348 0.011576 0.040585 0.022124
Kriging[regr=constant,corr=exp,thetaU=100,ARD=False] 0.034493 0.009328 0.025092 0.045610 0.030661
Kriging[regr=constant,corr=exp,thetaU=20,ARD=False] 0.034493 0.009328 0.025092 0.045610 0.030661
Kriging[regr=linear,corr=exp,thetaU=100,ARD=False] 0.035734 0.009922 0.025611 0.047926 0.031473
Kriging[regr=linear,corr=exp,thetaU=20,ARD=False] 0.035734 0.009922 0.025611 0.047926 0.031473
Kriging[regr=constant,corr=exp,thetaU=100,ARD=True] 0.051527 0.010941 0.037944 0.065866 0.047440
Kriging[regr=constant,corr=exp,thetaU=20,ARD=True] 0.051527 0.010941 0.037944 0.065866 0.047440
Kriging[regr=linear,corr=exp,thetaU=100,ARD=True] 0.065867 0.025312 0.039058 0.104449 0.059957
Kriging[regr=linear,corr=exp,thetaU=20,ARD=True] 0.065867 0.025312 0.039058 0.104449 0.059957
RBF[kernel=cubic,tail=quadratic,normalized=True] 0.121947 0.033552 0.077895 0.167120 0.127345
RBF[kernel=cubic,tail=constant,normalized=True] 0.125348 0.037982 0.072579 0.169413 0.140753
RBF[kernel=cubic,tail=linear,normalized=True] 0.125474 0.038609 0.071268 0.169843 0.137987
RBF[kernel=cubic,tail=linear+quadratic,normalized=True] 0.126070 0.039773 0.071279 0.171862 0.135489
###Output
_____no_output_____
###Markdown
Kriging
###Code
.. include:: ../../ezmodel/usage/models/usage_kriging.py
:literal:
###Output
_____no_output_____
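###Markdown
Since the usage file is pulled in via an rst include and not rendered here, the following is a rough sketch of the common fit/predict pattern; the import path and constructor arguments are assumptions inferred from the benchmark labels above, not verified API:
###Code
import numpy as np
from ezmodel.models.kriging import Kriging  # assumed import path

X = np.random.random((20, 2))   # 20 samples in 2 dimensions
y = (X ** 2).sum(axis=1)        # toy target

model = Kriging(regr="constant", corr="gauss", ARD=False)  # parameters mirror the benchmark labels
model.fit(X, y)
y_hat = model.predict(X)
###Output
_____no_output_____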
###Markdown
RBF
###Code
.. include:: ../../ezmodel/usage/models/usage_rbf.py
:literal:
###Output
_____no_output_____
###Markdown
Contact
###Code
Feel free to contact us if you have any questions:
::
Julian Blank (blankjul [at] msu.edu)
Michigan State University
Computational Optimization and Innovation Laboratory (COIN)
East Lansing, MI 48824, USA
###Output
_____no_output_____
###Markdown
[Installation](installing.rst)[Tutorials](tutorials.rst)[Examples](examples.rst)[Server](server.md)[Configuration](conf.md)[Gallery](gallery.rst)[API](api.rst)[Datasets](datasets.rst)[FAQ](faq.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting ram * **Memory efficient** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User friendly API:** you will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas. * **Lean:** separated into multiple packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Deprecated) Now part of vaex-enterprise. * `vaex-qt`: Program written using Qt GUI. * `vaex`: Meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. InstallationUsing conda: * `conda install -c conda-forge vaex`Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting startedWe assume that you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
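###Markdown
Reading your own data works in much the same way; for example (the path is a placeholder):
###Code
# df = vaex.from_csv('my_data.csv', convert=True)  # convert=True caches the data as hdf5 for fast access
###Output
_____no_output_____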
###Markdown
Using [square brackets[]](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation, storing only a representation of the computation, and computations are done on the fly when needed. You can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
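###Markdown
Numpy universal functions work on expressions too; for instance, the distance to the origin stays fully lazy (a small illustration):
###Code
np.sqrt(df.x**2 + df.y**2 + df.z**2)  # still an expression, nothing is computed yet
###Output
_____no_output_____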
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's grouby), and the shape and limits.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
###Markdown
[Installation](installing.rst)[Tutorial](tutorial.ipynb)[Examples](examples.rst)[API](api.rst)[Machine Learning](ml.rst)[Datasets](datasets.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** Works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting ram * **Memory efficient** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User friendly API:** You will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas. * **Lean:** separated into multiple packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Proof of concept) combines multiple servers / cluster into a single DataFrame for distributed computations. * `vaex-qt`: Program written using Qt GUI. * `vaex`: meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. InstallationUsing conda: * `conda install -c conda-forge vaex`Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting startedWe assume you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets[]](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation: only a representation of the computation is stored, and computations are done on the fly when needed. Even so, you can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's grouby), and the shape and limits.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
###Markdown
.. _quick-start:
# What is Featuretools?
<img src="_static/images/featuretools_nav2.svg" width="500" align="center" alt="Featuretools">
**Featuretools** is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning.
## 5 Minute Quick Start
Below is an example of using Deep Feature Synthesis (DFS) to perform automated feature engineering. In this example, we apply DFS to a multi-table dataset consisting of timestamped customer transactions.
###Code
import featuretools as ft
###Output
_____no_output_____
###Markdown
#### Load Mock Data
###Code
data = ft.demo.load_mock_customer()
###Output
_____no_output_____
###Markdown
#### Prepare data
In this toy dataset, there are 3 DataFrames.
- **customers**: unique customers who had sessions
- **sessions**: unique sessions and associated attributes
- **transactions**: list of events in this session
###Code
customers_df = data["customers"]
customers_df
sessions_df = data["sessions"]
sessions_df.sample(5)
transactions_df = data["transactions"]
transactions_df.sample(5)
###Output
_____no_output_____
###Markdown
First, we specify a dictionary with all the DataFrames in our dataset. The DataFrames are passed in with their index column and time index column if one exists for the DataFrame.
###Code
dataframes = {
    "customers" : (customers_df, "customer_id"),
    "sessions" : (sessions_df, "session_id", "session_start"),
    "transactions" : (transactions_df, "transaction_id", "transaction_time")
}
###Output
_____no_output_____
###Markdown
Second, we specify how the DataFrames are related. When two DataFrames have a one-to-many relationship, we call the "one" DataFrame, the "parent DataFrame". A relationship between a parent and child is defined like this:
(parent_dataframe, parent_column, child_dataframe, child_column)
In this dataset we have two relationships
###Code
relationships = [("sessions", "session_id", "transactions", "session_id"),
                 ("customers", "customer_id", "sessions", "customer_id")]
###Output
_____no_output_____
###Code
.. note::
To manage setting up DataFrames and relationships, we recommend using the :class:`EntitySet <featuretools.EntitySet>` class which offers convenient APIs for managing data like this. See :doc:`getting_started/using_entitysets` for more information.
###Output
_____no_output_____
###Markdown
Run Deep Feature SynthesisA minimal input to DFS is a dictionary of DataFrames, a list of relationships, and the name of the target DataFrame whose features we want to calculate. The output of DFS is a feature matrix and the corresponding list of feature definitions. Let's first create a feature matrix for each customer in the data
###Code
feature_matrix_customers, features_defs = ft.dfs(dataframes=dataframes,
relationships=relationships,
target_dataframe_name="customers")
feature_matrix_customers
###Output
_____no_output_____
###Markdown
We now have dozens of new features to describe a customer's behavior. Change target DataFrameOne of the reasons DFS is so powerful is that it can create a feature matrix for *any* DataFrame in our EntitySet. For example, if we wanted to build features for sessions.
###Code
dataframes = {
"customers" : (customers_df.copy(), "customer_id"),
"sessions" : (sessions_df.copy(), "session_id", "session_start"),
"transactions" : (transactions_df.copy(), "transaction_id", "transaction_time")
}
feature_matrix_sessions, features_defs = ft.dfs(dataframes=dataframes,
relationships=relationships,
target_dataframe_name="sessions")
feature_matrix_sessions.head(5)
###Output
_____no_output_____
###Markdown
Understanding Feature Output In general, Featuretools references generated features through the feature name. In order to make features easier to understand, Featuretools offers two additional tools, :func:`featuretools.graph_feature` and :func:`featuretools.describe_feature`, to help explain what a feature is and the steps Featuretools took to generate it. Let's look at this example feature:
###Code
feature = features_defs[18]
feature
###Output
_____no_output_____
###Markdown
Feature lineage graphsFeature lineage graphs visually walk through feature generation. Starting from the base data, they show step by step the primitives applied and intermediate features generated to create the final feature.
###Code
ft.graph_feature(feature)
###Output
_____no_output_____
###Markdown
.. graphviz:: getting_started/graphs/demo_feat.dot Feature descriptions Featuretools can also automatically generate English sentence descriptions of features. Feature descriptions help to explain what a feature is, and can be further improved by including manually defined custom definitions. See :doc:`/guides/feature_descriptions` for more details on how to customize automatically generated feature descriptions.
###Code
ft.describe_feature(feature)
###Output
_____no_output_____
###Markdown
What's next?* Learn about [Representing Data with EntitySets](getting_started/using_entitysets.ipynb)* Apply automated feature engineering with [Deep Feature Synthesis](getting_started/afe.ipynb)* Explore [runnable demos](https://www.featuretools.com/demos) based on real world use cases* Can't find what you're looking for? Ask for [help](resources/help.rst)
###Code
Table of contents
-----------------
.. toctree::
:maxdepth: 1
install
.. toctree::
:maxdepth: 2
getting_started/getting_started_index
guides/guides_index
.. toctree::
:maxdepth: 1
:caption: Resources and References
resources/resources_index
api_reference
Primitives <https://primitives.featurelabs.com/>
release_notes
Other links
------------
* :ref:`genindex`
* :ref:`search`
###Output
_____no_output_____
###Markdown
[Installation](installing.rst)[Tutorials](tutorials.rst)[Guides](guides.rst)[Configuration](conf.md)[API](api.rst)[Datasets](datasets.rst)[FAQ](faq.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting ram * **Memory efficient** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User friendly API:** you will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas. * **Lean:** separated into multiple packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Deprecated) Now part of vaex-enterprise. * `vaex-qt`: Program written using Qt GUI. * `vaex`: Meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. InstallationUsing conda: * `conda install -c conda-forge vaex`Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting startedWe assume that you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets[]](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation, storing only a representation of the computation, and computations are done on the fly when needed. You can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's grouby), and the shape and limits.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
###Markdown
Spreadsheet widget for the Jupyter Notebook InstallationTo install use pip:```$ pip install ipysheet```To make it work for Jupyter lab:```$ jupyter labextension install ipysheet```If you have notebook 5.2 or below, you also need to execute:```$ jupyter nbextension enable --py --sys-prefix ipysheet$ jupyter nbextension enable --py --sys-prefix ipysheet.renderer_nbext``` Getting startedAlthough ipysheet contains an object oriented interface, we recommend using the "state machine" based interface, similar to matplotlib's pyplot/pylab interface. Comparable to the matplotlib pylab interface, this interface keeps track of the current sheet. Using the [cell](api.rstipysheet.easy.cell) function, [Cell](api.rstipysheet.sheet.Cell) widgets are added to the current sheet. Importing ipysheet and invoking the [sheet](api.rstipysheet.easy.sheet) function will create the default spreadsheet widget. The function returns a [Sheet](api.rstipysheet.sheet.Sheet) instance; leaving that expression as the last statement of a code cell will display it, otherwise use `display(sheet)`. Note that this documentation is a Jupyter notebook, and you can try it out directly on Binder:[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/QuantStack/ipysheet/master?filepath=docs%2Fsource%2Findex.ipynb)
###Code
import ipysheet
sheet = ipysheet.sheet()
sheet
###Output
_____no_output_____
###Markdown
Using the [cell](api.rstipysheet.easy.cell) function, we can create [Cell](api.rstipysheet.sheet.Cell) widgets that are directly added to the current sheet.
###Code
sheet = ipysheet.sheet(rows=3, columns=4)
cell1 = ipysheet.cell(0, 0, 'Hello')
cell2 = ipysheet.cell(2, 0, 'World')
cell_value = ipysheet.cell(2,2, 42.)
sheet
###Output
_____no_output_____
###Markdown
EventsUsing link or observe we can link widgets together, or attach event handlers. **Note:** The examples below contain event handlers written in Python that need a running kernel; they will not work in the pure HTML documentation. They do work in Binder!
###Code
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
# changes in a or b should trigger this function
def calculate(change):
cell_sum.value = cell_a.value + cell_b.value
cell_a.observe(calculate, 'value')
cell_b.observe(calculate, 'value')
widgets.VBox([sheet, slider])
###Output
_____no_output_____
###Markdown
Cell rangesInstead of referring to a single cell, we can also refer to cell ranges, rows and columns.
###Code
sheet = ipysheet.sheet(rows=5, columns=4)
row = ipysheet.row(0, [0, 1, 2, 3], background_color="red")
column = ipysheet.column(1, ["a", "b", "c", "d"], row_start=1, background_color="green")
cells = ipysheet.cell_range([["hi", "ola"], ["ciao", "bonjour"], ["hallo", "guten tag"]],
row_start=1, column_start=2, background_color="yellow")
sheet
###Output
_____no_output_____
###Markdown
CalculationsSince this is such a common pattern, a helper decorator [calculation](api.rstipysheet.easy.calculation) is provided, shortening the above code considerably.
###Code
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
@ipysheet.calculation(inputs=[cell_a, cell_b], output=cell_sum)
def calculate(a, b):
return a + b
widgets.VBox([sheet, slider])
###Output
_____no_output_____
###Markdown
Renderersipysheet is built on Handsontable, which allows [custom renderers](https://docs.handsontable.com/demo-custom-renderers.html), which we also support. Note that this means ipysheet allows arbitrary JavaScript injection (TODO: make this part optional)
###Code
jscode_renderer_negative = """
function (instance, td, row, col, prop, value, cellProperties) {
Handsontable.renderers.TextRenderer.apply(this, arguments);
if (value < 0)
td.style.backgroundColor = 'red'
else
td.style.backgroundColor = 'green'
}
"""
ipysheet.renderer(code=jscode_renderer_negative, name='negative');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative')
s
###Output
_____no_output_____
###Markdown
If [flexx](http://flexx.readthedocs.io/en/stable/pyscript/index.html) is installed, Python code can be transpiled to JavaScript at runtime.
###Code
def renderer_negative(instance, td, row, col, prop, value, cellProperties):
Handsontable.renderers.TextRenderer.apply(this, arguments);
if value < 0:
td.style.backgroundColor = 'orange'
else:
td.style.backgroundColor = ''
ipysheet.renderer(code=renderer_negative, name='negative_transpiled');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative_transpiled')
s
###Output
_____no_output_____
###Markdown
[Installation](installing.rst)[Tutorial](tutorial.ipynb)[Examples](examples.rst)[API](api.rst)[Machine Learning](ml.rst)[Datasets](datasets.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting ram * **Memory efficient** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User friendly API:** you will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas. * **Lean:** separated into multiple packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Proof of concept) combined multiple servers / cluster into a single DataFrame for distributed computations. * `vaex-qt`: Program written using Qt GUI. * `vaex`: Meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. InstallationUsing conda: * `conda install -c conda-forge vaex`Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting startedWe assume that you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets[]](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation, storing only a representation of the computation, and computations are done on the fly when needed. You can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's grouby), and the shape and limits.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
###Markdown
[Installation](installing.rst)[Tutorials](tutorials.rst)[Examples](examples.rst)[Gallery](gallery.rst)[API](api.rst)[Datasets](datasets.rst)[FAQ](faq.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex?Vaex is a python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc, on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy, and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting ram * **Memory efficient** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User friendly API:** you will only need to deal with the DataFrame object, and tab completion + docstring will help you out: `ds.mean`, feels very similar to Pandas. * **Lean:** separated into multiple packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Proof of concept) combined multiple servers / cluster into a single DataFrame for distributed computations. * `vaex-qt`: Program written using Qt GUI. * `vaex`: Meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. InstallationUsing conda: * `conda install -c conda-forge vaex`Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting startedWe assume that you have installed vaex, and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us an example dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets[]](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame, without making a copy
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory copy!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion rows ($10^9$), computations with the data can waste memory, up to 8 GB for a new column. Instead, vaex uses lazy computation, storing only a representation of the computation, and computations are done on the fly when needed. You can just use many of the numpy functions, as if it was a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the binby argument (analogous to SQL's grouby), and the shape and limits.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a plot quickly
###Output
_____no_output_____
###Markdown
Installation The framework is available at the PyPi Repository:
###Code
.. code-block:: bash
pip install -U pysampling
###Output
_____no_output_____
###Markdown
Usage The sampling method for each algorithm must be imported from pysampling.sample. Here, we use Latin Hypercube Sampling to generate 50 points in 2 dimensions.
###Code
from pysampling.sample import sample
X = sample("lhs", 50, 2)
###Output
_____no_output_____
###Markdown
Then, we recommend using matplotlib or other visualization libraries to have a look at the results:
###Code
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1], s=30, facecolors='none', edgecolors='r')
plt.show()
###Output
_____no_output_____
###Markdown
Features So far our library provides the following implementations: - Random ('random') - Latin Hypercube Sampling ('lhs') - Sobol ('sobol') - Halton ('halton') The initialization of each of those will be shown in the following. Let us first define a method that helps us to visualize them in a 2d space.
###Code
import matplotlib.pyplot as plt
def show(X):
plt.scatter(X[:, 0], X[:, 1], s=30, facecolors='none', edgecolors='r')
plt.show()
###Output
_____no_output_____
###Markdown
Random ('random')
###Code
X = sample("random", 50, 2, seed=1)
show(X)
###Output
_____no_output_____
###Markdown
Latin Hypercube Sampling ('lhs')
###Code
X = sample("lhs", 50, 2, seed=1)
show(X)
###Output
_____no_output_____
###Markdown
Sobol ('sobol')
###Code
X = sample("sobol", 84, 2)
show(X)
X = sample("sobol", 84, 2, n_skip=100, n_leap=10)
show(X)
###Output
_____no_output_____
###Markdown
Halton ('halton')
###Code
X = sample("halton", 100, 2)
show(X)
###Output
_____no_output_____
###Markdown
Contact
###Code
.. |blankjul| raw:: html
<a href="http://www.cse.msu.edu/~blankjul/" target="_blank">My personal homepage</a>
|blankjul|
Feel free to contact me if you have any question:
::
Julian Blank (blankjul [at] egr.msu.edu)
Michigan State University
Computational Optimization and Innovation Laboratory (COIN)
East Lansing, MI 48824, USA
###Output
_____no_output_____
###Markdown
Home ![Woodwork](images/woodwork.svg) Woodwork is a library that helps with data typing of 2-dimensional tabular data structures. It provides a DataTable object, which contains the physical, logical, and semantic data types. It can be used with [Featuretools](https://www.featuretools.com), [EvalML](https://evalml.featurelabs.com/en/latest/), and general machine learning applications where logical and semantic typing information is important. Woodwork provides simple interfaces for adding and updating logical and semantic typing information, as well as selecting data columns based on the types. Quick Start Below is an example of using a Woodwork DataTable to automatically infer the Logical Types for a data structure and select columns with specific types.
###Code
import woodwork as ww
data = ww.demo.load_retail(nrows=100, return_dataframe=True)
dt = ww.DataTable(data, name="retail")
dt
filtered_dt = dt.select(include=['numeric', 'Boolean'])
filtered_dt.to_dataframe().head(5)
###Output
_____no_output_____
###Markdown
Table of contents
###Code
.. toctree::
:maxdepth: 1
install
start
.. toctree::
:maxdepth: 2
guides/guides_index
.. toctree::
:maxdepth: 1
api_reference
release_notes
###Output
_____no_output_____
###Markdown
[Installation](installing.rst) [Tutorial](tutorial.ipynb) [Examples](examples.rst) [API](api.rst) [Machine Learning](ml.rst) [Datasets](datasets.rst)
###Code
<style>
pre {
white-space: pre-wrap !important;
}
.table-striped > tbody > tr:nth-of-type(odd) {
background-color: #f9f9f9;
}
.table-striped > tbody > tr:nth-of-type(even) {
background-color: white;
}
.table-striped td, .table-striped th, .table-striped tr {
border: 1px solid black;
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html td, .rendered_html th {
text-align: left;
vertical-align: middle;
padding: 4px;
}
</style>
###Output
_____no_output_____
###Markdown
What is Vaex? Vaex is a Python library for lazy **Out-of-Core DataFrames** (similar to Pandas), to visualize and explore big tabular datasets. It can calculate *statistics* such as mean, sum, count, standard deviation etc., on an *N-dimensional grid* up to **a billion** ($10^9$) objects/rows **per second**. Visualization is done using **histograms**, **density plots** and **3d volume rendering**, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy and lazy computations for best performance (no memory wasted). Why vaex * **Performance:** Works with huge tabular data, processes $\gt 10^9$ rows/second * **Lazy / Virtual columns:** compute on the fly, without wasting RAM * **Memory efficient:** no memory copies when doing filtering/selections/subsets. * **Visualization:** directly supported, a one-liner is often enough. * **User-friendly API:** You will only need to deal with the DataFrame object, and tab completion + docstrings will help you out: `ds.mean` feels very similar to Pandas. * **Lean:** functional areas are split into separate packages * `vaex-core`: DataFrame and core algorithms, takes numpy arrays as input columns. * `vaex-hdf5`: Provides memory mapped numpy arrays to a DataFrame. * `vaex-arrow`: [Arrow](https://arrow.apache.org/) support for cross-language data sharing. * `vaex-viz`: Visualization based on matplotlib. * `vaex-jupyter`: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet. * `vaex-astro`: Astronomy-related transformations and FITS file support. * `vaex-server`: Provides a server to access a DataFrame remotely. * `vaex-distributed`: (Proof of concept) Combines multiple servers / clusters into a single DataFrame for distributed computations. * `vaex-qt`: Program written using Qt GUI. * `vaex`: meta package that installs all of the above. * `vaex-ml`: [Machine learning](ml.ipynb) * **Jupyter integration**: vaex-jupyter will give you interactive visualization and selection in the Jupyter notebook and Jupyter lab. Installation Using conda: * `conda install -c conda-forge vaex` Using pip: * `pip install --upgrade vaex` Or read the [detailed instructions](installing.ipynb) Getting started We are assuming you have installed vaex and are running a [Jupyter notebook server](https://jupyter.readthedocs.io/en/latest/running.html). We start by importing vaex and asking it to give us a sample dataset.
###Code
import vaex
df = vaex.example() # open the example dataset provided with vaex
###Output
_____no_output_____
###Markdown
Instead, you can [download some larger datasets](datasets.rst), or [read in your csv file](api.rstvaex.from_csv).
###Code
df # will pretty print the DataFrame
###Output
_____no_output_____
###Markdown
Using [square brackets `[]`](api.rstvaex.dataframe.DataFrame.__getitem__), we can easily filter or get different views on the DataFrame.
###Code
df_negative = df[df.x < 0] # easily filter your DataFrame __without duplicating data__
df_negative[:5][['x', 'y']] # take the first five rows, and only the 'x' and 'y' column (no memory used copying data!)
###Output
_____no_output_____
###Markdown
When dealing with huge datasets, say a billion ($10^9$) rows, computations with the data can waste memory – up to $8$ GB for a new column. Instead, vaex uses lazy computation; only a representation of the computation is stored, and computations are done on the fly when needed. Even so, you can use many of the numpy functions as if it were a normal array.
###Code
import numpy as np
# creates an expression (nothing is computed/evaluated)
some_expression = df.x + df.z
some_expression # for convenience, we print out some values
###Output
_____no_output_____
###Markdown
These expressions can be added to a DataFrame, creating what we call a *virtual column*. These virtual columns are similar to normal columns, except they do not waste memory.
###Code
df['r'] = some_expression # add a (virtual) column that will be computed on the fly
df.mean(df.x), df.mean(df.r) # calculate statistics on normal and virtual columns
###Output
_____no_output_____
###Markdown
One of the core features of vaex is its ability to calculate statistics on a regular (N-dimensional) grid. The dimensions of the grid are specified by the arguments: `binby` (analogous to SQL's `groupby`), `shape`, and `limits`.
###Code
df.mean(df.r, binby=df.x, shape=32, limits=[-10, 10]) # create statistics on a regular grid (1d)
df.mean(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d
df.count(df.r, binby=[df.x, df.y], shape=32, limits=[-10, 10]) # or 2d counts/histogram
###Output
_____no_output_____
###Markdown
These one and two dimensional grids can be visualized using any plotting library, such as matplotlib, but the setup can be tedious. For convenience, we can use [plot1d](api.rstvaex.dataframe.DataFrame.plot1d), [plot](api.rstvaex.dataframe.DataFrame.plot), or see the [list of plotting commands](api.rstvisualization)
###Code
df.plot(df.x, df.y, show=True); # make a quick plot
###Output
_____no_output_____
###Markdown
IPython Notebook Validation for py.test - Documentation One of the powerful uses of the IPython notebook is for documentation purposes; here we use a notebook to demonstrate the behaviour and usage of the IPython Notebook Validation plugin for py.test. The IPython notebook format `.ipynb` stores outputs as well as inputs. Validating the notebook means to rerun the notebook and make sure that it is generating the same output as has been stored. Therefore, the **user MUST make the following distinction**: 1. Running a notebook manually will likely change the output stored in the associated .ipynb file. These outputs will be used as references for the tests (i.e. the outputs from the last time you ran the notebook) 2. Validating with the py.test plugin - these tests run your notebook code separately without storing the information; the outputs generated will be compared against those in the .ipynb file The purpose of the testing module is to ensure that the notebook is behaving as expected and that changes to underlying source code haven't affected the results of an IPython notebook. For example, for documentation purposes - such as this. Command line usage The py.test program doesn't usually collect notebooks for testing; by passing the `--nbval` flag at the command line, the IPython Notebook Validation plugin will collect and test notebook cells, comparing their outputs with those saved in the file.```$ py.test --nbval my_notebook.ipynb```There is also an option `--nbval-lax`, which collects notebooks and runs them, failing if there is an error. This mode does not check the output of cells unless they are marked with a special `NBVAL_CHECK_OUTPUT` comment.```$ py.test --nbval-lax my_notebook.ipynb``` REGEX Output sanitizing Since all output is captured by the IPython notebook, some pesky messages and prompts (with time-stamped messages, for example) may cause tests to fail every time, which might be expected. The plugin allows the user to specify a sanitizing file at the command prompt using the following flag:```$ py.test --nbval my_notebook.ipynb --sanitize-with my_sanitize_file```This sanitize file contains a number of REGEX replacements. It is recommended, when removing output for the tests, that you replace the removed output with some sort of marker; this helps with debugging. The following file is written to the folder of this notebook and can be used to sanitize its outputs:
###Code
%%writefile doc_sanitize.cfg
[regex1]
regex: \d{1,2}/\d{1,2}/\d{2,4}
replace: DATE-STAMP
[regex2]
regex: \d{2}:\d{2}:\d{2}
replace: TIME-STAMP
###Output
Writing doc_sanitize.cfg
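To preview what these two replacements do outside of nbval, here is a minimal standalone sketch using plain Python `re` (the sample string is made up for illustration):
```python
import re

sample = "The time is: 15:28:30 and the date is: 21/12/16"

# The same two patterns as in doc_sanitize.cfg above
sample = re.sub(r'\d{1,2}/\d{1,2}/\d{2,4}', 'DATE-STAMP', sample)
sample = re.sub(r'\d{2}:\d{2}:\d{2}', 'TIME-STAMP', sample)

print(sample)  # The time is: TIME-STAMP and the date is: DATE-STAMP
```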
###Markdown
The first replacement finds dates in the given format and replaces them with the label 'DATE-STAMP'; likewise for strings that look like times. These will prevent the tests from failing due to time differences. Validate this notebook This documentation is written as a Notebook. You can validate this notebook yourself, as shown below; the outputs that you see here are stored in the ipynb file. If your system produces different outputs, the testing process will fail. Just use the following commands:
```
$ cd /path/to/repo/docs/source
$ py.test --nbval index.ipynb --sanitize-with doc_sanitize.cfg
```
Examples of plugin behaviour The following examples demonstrate how the plugin behaves during testing. Test this notebook yourself to see the validation in action! These two imports produce no output as standard; if any **warnings** are printed out, the cell will fail. Under normal operating conditions they will pass.
###Code
import numpy as np
import time
###Output
_____no_output_____
###Markdown
If python doesn't consistently print 7, then something has gone terribly wrong. **Deterministic cells** are expected to pass every time
###Code
print(5+2)
###Output
7
###Markdown
**Random outputs** will always fail.
###Code
print([np.random.rand() for i in range(4)])
print([np.random.rand() for i in range(4)])
###Output
[0.36133679016382714, 0.5043774697891126, 0.23281910875007927, 0.2713065513128683]
[0.5512421277985322, 0.02592706358897756, 0.05036036771084684, 0.7515926759190724]
###Markdown
**Inconsistent number of lines** of output will cause an error to be thrown.
###Code
for i in range(np.random.randint(1, 8)):
print(1)
###Output
1
1
1
###Markdown
Because the **time and date** will change with each run, we would expect this cell to fail every time. Using the sanitize file `doc_sanitize.cfg` (created above), you can clean up these outputs.
###Code
print('The time is: ' + time.strftime('%H:%M:%S'))
print("Today's date is: " + time.strftime('%d/%m/%y'))
###Output
The time is: 15:28:30
Today's date is: 21/12/16
###Markdown
Avoid output comparison for specific cells In case we want to avoid the testing process in specific input cells, we can write the comment ** NBVAL_IGNORE_OUTPUT ** at the beginning of them:
###Code
# NBVAL_IGNORE_OUTPUT
print('This is not going to be tested')
print(np.random.randint(1, 20000))
###Output
This is not going to be tested
12544
###Markdown
There's also a counterpart to ensure the output is tested even when using `--nbval-lax`:
###Code
# NBVAL_CHECK_OUTPUT
print("This will be tested")
print(6 * 7)
###Output
This will be tested
42
###Markdown
Note that unexecuted cells will always skip their output check:
###Code
print('This is not going to be tested when unrun')
print(np.random.randint(1, 20000))
###Output
_____no_output_____
###Markdown
Skipping specific cells If, for some reason, a cell should not be executed during testing, the comment ** NBVAL_SKIP** can be used:
```python
# NBVAL_SKIP
print("Entering infinite loop...")
while True:
    pass
```
Checking exceptions Sometimes, we might want to allow a notebook cell to raise an exception, and check that the traceback is as we expect. By annotating the cell with the comment ** NBVAL_RAISES_EXCEPTION ** you can indicate that the cell is expected to raise an exception. The full traceback is not compared, but rather just that the raised exception is the same as the stored exception.
###Code
# NBVAL_RAISES_EXCEPTION
print("This exception will be tested")
raise RuntimeError("Foo")
###Output
This exception will be tested
###Markdown
This composes with the per-cell checking comments, so if you would like to avoid exceptions creating a test failure, but do not want to check the traceback, use ` NBVAL_IGNORE_OUTPUT`
###Code
# NBVAL_RAISES_EXCEPTION
print("If the raised exception doesn't match the stored exception, we get a failure")
raise SyntaxError("Foo")
# NBVAL_IGNORE_OUTPUT
# NBVAL_RAISES_EXCEPTION
print("This exception will not be checked, but will not cause a failure.")
raise RuntimeError("Bar")
###Output
This exception will not be checked, but will not cause a failure.
###Markdown
Using tags instead of comments If you do not want to put nbval comment annotations in your notebook, or your source language is not compatible with such annotations, you can use cell tags instead. Cell tags are strings that are added to the cell metadata under the label "tags", and can be added and removed using the "Tags" toolbar from Notebook version 5. The tags that Nbval recognizes are the same as the comment names, except lowercase, and with dashes ('-') instead of underscores ('\_'). For instance, the comment "`NBVAL_IGNORE_OUTPUT`" becomes the tag "`nbval-ignore-output`". However, for "`NBVAL_RAISES_EXCEPTION`", either "`nbval-raises-exception`" or the plain "`raises-exception`" tag can be used, since as of Notebook 5.1, the latter is a special tag that tells the Notebook cell executor to continue running normally after an exception is raised.
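If you would rather add such tags programmatically, here is a minimal sketch using the `nbformat` library (an assumption; any tool that edits cell metadata works, and the filename is hypothetical):
```python
import nbformat

# Hedged sketch: tag every code cell that uses np.random so nbval ignores its output.
nb = nbformat.read('my_notebook.ipynb', as_version=4)
for cell in nb.cells:
    if cell.cell_type == 'code' and 'np.random' in cell.source:
        tags = cell.metadata.setdefault('tags', [])
        if 'nbval-ignore-output' not in tags:
            tags.append('nbval-ignore-output')
nbformat.write(nb, 'my_notebook.ipynb')
```
Figures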
###Code
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Currently, only the matplotlib text output of the Figure is compared, but it is possible to modify the plugin to allow comparison of the whole image.
###Code
plt.imshow(np.array([[i + j for i in range(3)]
for j in range(3)]),
interpolation='None'
)
###Output
_____no_output_____
###Markdown
Skipping certain output types In case nbval is comparing some cell outputs you do not care about, like:
```
Traceback:
missing key: TESTING dict_keys(['stderr']) != REFERENCE dict_keys(['application/javascript', 'stderr'])
```
There is a workaround. Add the following to your conftest.py:
###Code
def pytest_collectstart(collector):
collector.skip_compare += 'text/html', 'application/javascript', 'stderr',
###Output
_____no_output_____ |
courses/udacity_intro_to_tensorflow_lite/tflite_c02_transfer_learning.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Setup
###Code
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to use Hub modules for TF 1.x won't work here; please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset. This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in importing and using it with TensorFlow, see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model. Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the Data Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the model All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantization The simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency. To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantization We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience. Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantization To require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
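If the deployment target also needs integer input and output tensors, newer TensorFlow releases expose `inference_input_type`/`inference_output_type` on the converter. A hedged sketch (this notebook otherwise keeps the default float interface for convenience, so verify availability against your TF version):
```python
# Hedged sketch: also quantize the model's input/output interface to int8.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
```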
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
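To put a rough number on this for your own runtime, here is a minimal timing sketch (reusing the interpreter and one test image prepared above; 50 repetitions is an arbitrary choice):
```python
import time

img, _ = next(iter(test_batches.take(1)))
interpreter.set_tensor(input_index, img)

start = time.perf_counter()
for _ in range(50):
    interpreter.invoke()
elapsed = (time.perf_counter() - start) / 50
print("Average invoke latency: {:.1f} ms".format(elapsed * 1000))
```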
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model. **NOTE: You might have to run the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps, in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to use Hub modules for TF 1.x won't work here; please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset. This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in importing and using it with TensorFlow, see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model. Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the Data Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the model All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantization The simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency. To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantization We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience. Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantization To require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model. **NOTE: You might have to run the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps, in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Setup
###Code
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to use Hub modules for TF 1.x won't work here; please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset. This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in importing and using it with TensorFlow, see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model. Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the Data Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the model All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantization The simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency. To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantization We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience. Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantization To require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model. **NOTE: You might have to run the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps, in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Setup
###Code
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to use Hub modules for TF 1.x won't work here; please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset. This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in importing and using it with TensorFlow, see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model. Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the Data Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the model All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantization The simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency. To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantization We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience. Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantization To require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
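###Markdown
A quick sanity check on the effect of quantization is the size of the file just written. A minimal sketch:
###Code
import os  # likely already imported in the setup cell
size_mb = os.path.getsize(tflite_model_file) / (1024 * 1024)
print('TFLite model size: {:.2f} MB'.format(size_mb))
###Output
_____no_output_____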
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
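###Markdown
Since the loop above keeps the predictions and the ground-truth labels side by side, a rough accuracy over the sampled images is one line away. A minimal sketch:
###Code
# Count how many of the sampled test images were classified correctly.
correct = sum(int(np.argmax(p) == t) for p, t in zip(predictions, test_labels))
print('{} / {} sampled test images classified correctly'.format(correct, len(test_labels)))
###Output
_____no_output_____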
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing, TensorFlow Lite doesn't have highly optimized server CPU kernels. For this reason, post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model

**NOTE: You might have to run the cell below twice.**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (optional)

This part downloads additional test images for the mobile apps, in case you need to try out more samples.
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
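###Markdown
As a final end-to-end check, one of the JPEGs just written can be pushed back through the TFLite interpreter. A hedged sketch; the file name is hypothetical and depends on which labels appeared in the first 50 test examples:
###Code
# Hypothetical file name written by the loop above; adjust as needed.
img = Image.open('test_images/cat_0.jpg').resize(IMAGE_SIZE)
x = np.expand_dims(np.array(img, dtype=np.float32) / 255.0, axis=0)
interpreter.set_tensor(input_index, x)
interpreter.invoke()
print('Predicted:', class_names[np.argmax(interpreter.get_tensor(output_index))])
###Output
_____no_output_____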
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Run in Google Colab View source on GitHub Setup
###Code
try:
%tensorflow_version 2.x #gpu
except:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pylab as plt
import numpy as np
tf.enable_v2_behavior()
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to useHub modules for TF 1.x won't work here, please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dog"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the DataUse the `tf.image` module to format the images for the task.Resize the images to a fixes input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the modelAll it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantizationThe simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantizationWe can get further latency improvements, reductions in peak memory usage, and access to integer only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience.Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantizationTo require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model.**NOTE: You might have to run to the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Run in Google Colab View source on GitHub Setup
###Code
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to useHub modules for TF 1.x won't work here, please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dog"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[80%:]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the DataUse the `tf.image` module to format the images for the task.Resize the images to a fixes input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the modelAll it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantizationThe simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantizationWe can get further latency improvements, reductions in peak memory usage, and access to integer only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience.Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantizationTo require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model.**NOTE: You might have to run to the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Run in Google Colab View source on GitHub Setup
###Code
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to useHub modules for TF 1.x won't work here, please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dog"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[80%:]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the DataUse the `tf.image` module to format the images for the task.Resize the images to a fixes input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the modelAll it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantizationThe simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantizationWe can get further latency improvements, reductions in peak memory usage, and access to integer only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience.Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantizationTo require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model.**NOTE: You might have to run to the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Run in Google Colab View source on GitHub Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to useHub modules for TF 1.x won't work here, please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dog"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[80%:]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the DataUse the `tf.image` module to format the images for the task.Resize the images to a fixes input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the modelAll it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantizationThe simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantizationWe can get further latency improvements, reductions in peak memory usage, and access to integer only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience.Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantizationTo require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model.**NOTE: You might have to run to the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (Optional) This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite Run in Google Colab View source on GitHub Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to useHub modules for TF 1.x won't work here, please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.This `tfds` package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see [loading image data](../load_data/images.ipynb)
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dog"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
split=['train[80%:]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the DataUse the `tf.image` module to format the images for the task.Resize the images to a fixes input size, and rescale the input channels
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the modelAll it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantizationThe simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantizationWe can get further latency improvements, reductions in peak memory usage, and access to integer only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience.Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantizationTo require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model.**NOTE: You might have to run to the cell below twice**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (optional). This part downloads additional test images for the mobile apps, in case you need to try out more samples.
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite. Setup
###Code
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to use. Hub modules for TF 1.x won't work here; please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing: Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset. This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in using it with TensorFlow, see [loading image data](../load_data/images.ipynb).
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
(train_examples, validation_examples, test_examples), info = tfds.load(
'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the Data: Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels.
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the model: All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes)
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantization: The simplest form of post-training quantization quantizes weights from floating point to 8 bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8 bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8 bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantization: We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience. Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantization: To require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
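# Optional check (an addition, not from the original notebook): report the
# on-disk size of the quantized flatbuffer we just wrote.
print("TFLite model size: {:.1f} KB".format(len(tflite_model) / 1024))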
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model. **NOTE: You might have to run the cell below twice.**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (optional). This part downloads additional test images for the mobile apps, in case you need to try out more samples.
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Transfer Learning with TensorFlow Hub for TFLite. Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Select the Hub/TF2 module to use. Hub modules for TF 1.x won't work here; please use one of the selections provided.
###Code
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
###Output
_____no_output_____
###Markdown
Data preprocessing: Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset. This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in using it with TensorFlow, see [loading image data](../load_data/images.ipynb).
###Code
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.Since `"cats_vs_dogs"` doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
###Code
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
###Output
_____no_output_____
###Markdown
Format the Data: Use the `tf.image` module to format the images for the task. Resize the images to a fixed input size, and rescale the input channels.
###Code
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
###Output
_____no_output_____
###Markdown
Now shuffle and batch the data
###Code
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
###Output
_____no_output_____
###Markdown
Inspect a batch
###Code
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
###Output
_____no_output_____
###Markdown
Defining the model: All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module. For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
###Code
do_fine_tuning = False #@param {type:"boolean"}
###Output
_____no_output_____
###Markdown
Load TFHub Module
###Code
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.summary()
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 7 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
Training the model
###Code
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
_____no_output_____
###Markdown
Export the model
###Code
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
###Output
_____no_output_____
###Markdown
Export the SavedModel
###Code
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
%%bash -s $CATS_VS_DOGS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
###Output
_____no_output_____
###Markdown
Convert using TFLite's Converter Load the TFLiteConverter with the SavedModel
###Code
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
###Output
_____no_output_____
###Markdown
Post-training quantization: The simplest form of post-training quantization quantizes weights from floating point to 8 bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8 bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.To further improve latency, hybrid operators dynamically quantize activations to 8 bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
###Code
converter.optimizations = [tf.lite.Optimize.DEFAULT]
###Output
_____no_output_____
###Markdown
Post-training integer quantization: We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
###Code
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
converter.representative_dataset = representative_data_gen
###Output
_____no_output_____
###Markdown
The resulting model will be fully quantized but still take float input and output for convenience. Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. Full integer quantization: To require the converter to only output integer operations, one can specify:
###Code
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
###Output
_____no_output_____
###Markdown
Finally convert the model
###Code
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the TFLite model using the Python Interpreter
###Code
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
###Output
_____no_output_____
###Markdown
NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
###Code
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
###Output
_____no_output_____
###Markdown
Download the model. **NOTE: You might have to run the cell below twice.**
###Code
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
###Output
_____no_output_____
###Markdown
Prepare the test images for download (optional). This part downloads additional test images for the mobile apps, in case you need to try out more samples.
###Code
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq cats_vs_dogs_test_images.zip -r test_images/
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
###Output
_____no_output_____ |
Array/0924/926. Flip String to Monotone Increasing.ipynb | ###Markdown
Problem: A binary string of '0's and '1's is monotone increasing if it consists of some number of '0's (possibly zero) followed by some number of '1's (also possibly zero). We may flip any '0' to a '1' or any '1' to a '0'. Return the minimum number of flips needed to make S monotone increasing. Example 1: Input: "00110" Output: 1 Explanation: We flip the last digit to get 00111. Example 2: Input: "010110" Output: 2 Explanation: We flip to get 011111, or alternatively 000111. Example 3: Input: "00011000" Output: 2 Explanation: We flip to get 00000000. Note: 1. 1 <= S.length <= 20000 2. S only consists of '0' and '1' characters.
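One way to see why both solutions below work: a monotone increasing result is fully determined by a split point $k$, where every character before $k$ becomes '0' and every character from $k$ onward becomes '1'. The cost of a split is therefore $\text{flips}(k) = \#\{i \lt k : S_i = 1\} + \#\{i \geq k : S_i = 0\}$, minimized over $k \in \{0, 1, \dots, n\}$. For "00110", the split $k = 2$ costs $0 + 1 = 1$ (flip only the trailing '0'), matching Example 1.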
###Code
from collections import Counter
class Solution:
    def minFlipsMonoIncr(self, S: str) -> int:
        # Count the total number of '0's and '1's once.
        s_freq = Counter(S)
        # The two extreme splits: keep everything as '1's (flip all '0's)
        # or keep everything as '0's (flip all '1's).
        res = min(s_freq['0'], s_freq['1'])
        zeros, ones = 0, 0
        for i in range(len(S) - 1):
            if S[i] == '0':
                zeros += 1
            else:
                ones += 1
            # Split after index i: flip the '1's in the prefix (ones)
            # plus the '0's left in the suffix (s_freq['0'] - zeros).
            res = min(res, s_freq['0'] - zeros + ones)
        return res
class Solution:
    def minFlipsMonoIncr(self, S: str) -> int:
        # d0: number of '1's seen so far (cost of flipping the prefix to all '0's).
        # d1: minimum flips to make the prefix scanned so far monotone increasing.
        d0, d1 = 0, 0
        for i in range(len(S)):
            if S[i] == '1':
                # One more '1': the all-'0' cost grows; d1 is unaffected,
                # since a trailing '1' keeps the string monotone.
                d0 += 1
            else:
                # A '0' is either flipped to '1' (d1 + 1), or all earlier '1's
                # are flipped to '0' (d0); keep the cheaper option.
                d1 += 1
                d1 = min(d0, d1)
        return min(d0, d1)
solution = Solution()
solution.minFlipsMonoIncr("010110")
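# Added checks (not in the original notebook): verify the solution against
# the three examples from the problem statement.
assert solution.minFlipsMonoIncr("00110") == 1
assert solution.minFlipsMonoIncr("010110") == 2
assert solution.minFlipsMonoIncr("00011000") == 2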
###Output
_____no_output_____ |
finance/DRL.ipynb | ###Markdown
Deep Reinforcement Learning for Optimal Execution of Portfolio Transactions. Introduction: This notebook demonstrates how to use Deep Reinforcement Learning (DRL) for optimizing the execution of large portfolio transactions. We begin with a brief review of reinforcement learning and actor-critic methods. Then, you will use an actor-critic method to generate optimal trading strategies that maximize profit when liquidating a block of shares. Actor-Critic Methods: In reinforcement learning, an agent makes observations and takes actions within an environment, and in return it receives rewards. Its objective is to learn to act in a way that will maximize its expected long-term rewards. Fig 1. - Reinforcement Learning. There are several types of RL algorithms, and they can be divided into three groups:- **Critic-Only**: Critic-Only methods, also known as Value-Based methods, first find the optimal value function and then derive an optimal policy from it. - **Actor-Only**: Actor-Only methods, also known as Policy-Based methods, search directly for the optimal policy in policy space. This is typically done by using a parameterized family of policies over which optimization procedures can be used directly. - **Actor-Critic**: Actor-Critic methods combine the advantages of actor-only and critic-only methods. In this method, the critic learns the value function and uses it to determine how the actor's policy parameters should be changed. In this case, the actor brings the advantage of computing continuous actions without the need for optimization procedures on a value function, while the critic supplies the actor with knowledge of the performance. Actor-critic methods usually have good convergence properties, in contrast to critic-only methods. The **Deep Deterministic Policy Gradients (DDPG)** algorithm is one example of an actor-critic method. Fig 2. - Actor-Critic Reinforcement Learning. In this notebook, we will use DDPG to determine the optimal execution of portfolio transactions. In other words, we will use the DDPG algorithm to solve the optimal liquidation problem. But before we can apply the DDPG algorithm, we first need to formulate the optimal liquidation problem so that it can be solved using reinforcement learning. In the next section we will see how to do this. Modeling Optimal Execution as a Reinforcement Learning Problem: As we learned in the previous lessons, the optimal liquidation problem is a minimization problem, *i.e.* we need to find the trading list that minimizes the implementation shortfall. In order to solve this problem through reinforcement learning, we need to restate the optimal liquidation problem in terms of **States**, **Actions**, and **Rewards**. Let's start by defining our States. States: The optimal liquidation problem entails that we sell all our shares within a given time frame. Therefore, our state vector must contain some information about the time remaining or, equivalently, the number of trades remaining. 
We will use the latter and use the following features to define the state vector at time $t_k$:$$[r_{k-5},\, r_{k-4},\, r_{k-3},\, r_{k-2},\, r_{k-1},\, r_{k},\, m_{k},\, i_{k}]$$where:- $r_{k} = \log\left(\frac{\tilde{S}_k}{\tilde{S}_{k-1}}\right)$ is the log-return at time $t_k$- $m_{k} = \frac{N_k}{N}$ is the number of trades remaining at time $t_k$ normalized by the total number of trades.- $i_{k} = \frac{x_k}{X}$ is the remaining number of shares at time $t_k$ normalized by the total number of shares.The log-returns capture information about stock prices before time $t_k$, which can be used to detect possible price trends. The number of trades and shares remaining allow the agent to learn to sell all the shares within a given time frame. It is important to note that in real-world trading scenarios, this state vector can hold many more variables. Actions: Since the optimal liquidation problem only requires us to sell stocks, it is reasonable to define the action $a_k$ to be the number of shares to sell at time $t_{k}$. However, if we start with millions of stocks, interpreting the action directly as the number of shares to sell at each time step can lead to convergence problems, because the agent would need to produce actions with very high values. Instead, we will interpret the action $a_k$ as a **percentage**. In this case, the actions produced by the agent will only need to be between 0 and 1. Using this interpretation, we can determine the number of shares to sell at each time step using:$$n_k = a_k \times x_k$$where $x_k$ is the number of shares remaining at time $t_k$. Rewards: Defining the rewards is trickier than defining states and actions, since the original problem is a minimization problem. One option is to use the difference between two consecutive utility functions. Remember, the utility function is given by:$$U(x) = E(x) + \lambda V(x)$$After each time step, we compute the utility using the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model for the remaining time and inventory, while holding the parameter $\lambda$ constant. Denoting the optimal trading trajectory computed at time $t$ as $x^*_t$, we define the reward as: $$R_{t} = \frac{U_t(x^*_t) - U_{t+1}(x^*_{t+1})}{U_t(x^*_t)}$$where we have normalized the difference to make the actor-critic model easier to train. Simulation Environment: In order to train our DDPG algorithm we will use a very simple simulated trading environment. This environment simulates stock prices that follow a discrete arithmetic random walk, and assumes that the permanent and temporary market impact functions are linear functions of the rate of trading, just like in the Almgren and Chriss model. This simple trading environment serves as a starting point for creating more complex trading environments. You are encouraged to extend it by adding more complexity to simulate real-world trading dynamics, such as book orders, network latencies, trading fees, etc. The simulated environment is contained in the **syntheticChrissAlmgren.py** module. You are encouraged to take a look at it and modify its parameters as you wish. Let's take a look at the default parameters of our simulation environment. We have set the initial stock price to be $S_0 = 50$, and the total number of shares to sell to one million. This gives an initial portfolio value of $\$50$ Million dollars. 
We have also set the trader's risk aversion to $\lambda = 10^{-6}$.The stock price will have 12\% annual volatility, a [bid-ask spread](https://www.investopedia.com/terms/b/bid-askspread.asp) of 1/8, and an average daily trading volume of 5 million shares. Assuming there are 250 trading days in a year, this gives a daily volatility in stock price of $0.12 / \sqrt{250} \approx 0.8\%$. We will use a liquidation time of $T = 60$ days and we will set the number of trades $N = 60$. This means that $\tau=\frac{T}{N} = 1$, i.e. we will be making one trade per day. For the temporary cost function we will set the fixed cost of selling to be 1/2 of the bid-ask spread, $\epsilon = 1/16$. We will set $\eta$ such that for each one percent of the daily volume we trade, we incur a price impact equal to the bid-ask spread. For example, trading at a rate of $5\%$ of the daily trading volume incurs a one-time cost on each trade of 5/8. Under this assumption we have $\eta = (1/8)/(0.01 \times 5 \times 10^6) = 2.5 \times 10^{-6}$.For the permanent costs, a common rule of thumb is that price effects become significant when we sell $10\%$ of the daily volume. If we suppose that significant means that the price depression is one bid-ask spread, and that the effect is linear for smaller and larger trading rates, then we have $\gamma = (1/8)/(0.1 \times 5 \times 10^6) = 2.5 \times 10^{-7}$. The tables below summarize the default parameters of the simulation environment.
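As a quick sanity check, we can reproduce these derived values in code. The snippet below is an added illustration; the constant names are our own and need not match the identifiers used inside `syntheticChrissAlmgren.py`.
###Code
# Reproduce the derived environment parameters (illustration only; these
# constant names are assumptions, not necessarily the module's own).
import numpy as np
ANNUAL_VOLATILITY = 0.12
TRADING_DAYS = 250
DAILY_VOLUME = 5e6
BID_ASK_SPREAD = 1 / 8
daily_volatility = ANNUAL_VOLATILITY / np.sqrt(TRADING_DAYS)  # ~0.0076, i.e. ~0.8%
eta = BID_ASK_SPREAD / (0.01 * DAILY_VOLUME)                  # 2.5e-6 (temporary impact)
gamma = BID_ASK_SPREAD / (0.1 * DAILY_VOLUME)                 # 2.5e-7 (permanent impact)
print(daily_volatility, eta, gamma)
###Output
_____no_output_____
###Markdown
The environment's actual defaults can be retrieved with the helper below.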
###Code
import utils
# Get the default financial and AC Model parameters
financial_params, ac_params = utils.get_env_param()
financial_params
ac_params
###Output
_____no_output_____
###Markdown
Reinforcement Learning: In the code below we use DDPG to find a policy that can generate optimal trading trajectories that minimize implementation shortfall, and that can be benchmarked against the Almgren and Chriss model. We implement a typical reinforcement learning workflow to train the actor and critic using the simulation environment. We feed the states observed from our simulator to an agent. The agent first predicts an action using the actor model and performs the action in the environment. Then, the environment returns the reward and the new state. This process continues for the given number of episodes. To get accurate results, you should run the code for at least 10,000 episodes.
###Code
import numpy as np
import syntheticChrissAlmgren as sca
from ddpg_agent import Agent
from collections import deque
# Create simulation environment
env = sca.MarketEnvironment()
# Initialize Feed-forward DNNs for Actor and Critic models.
agent = Agent(state_size=env.observation_space_dimension(), action_size=env.action_space_dimension(), random_seed=0)
# Set the liquidation time
lqt = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
tr = 1e-6
# Set the number of episodes to run the simulation
episodes = 10000
shortfall_hist = np.array([])
shortfall_deque = deque(maxlen=100)
for episode in range(episodes):
# Reset the enviroment
cur_state = env.reset(seed = episode, liquid_time = lqt, num_trades = n_trades, lamb = tr)
# set the environment to make transactions
env.start_transactions()
for i in range(n_trades + 1):
# Predict the best action for the current state.
action = agent.act(cur_state, add_noise = True)
# Action is performed and new state, reward, info are received.
new_state, reward, done, info = env.step(action)
# current state, action, reward, new state are stored in the experience replay
agent.step(cur_state, action, reward, new_state, done)
# roll over new state
cur_state = new_state
if info.done:
shortfall_hist = np.append(shortfall_hist, info.implementation_shortfall)
shortfall_deque.append(info.implementation_shortfall)
break
if (episode + 1) % 100 == 0: # print average shortfall over last 100 episodes
print('\rEpisode [{}/{}]\tAverage Shortfall: ${:,.2f}'.format(episode + 1, episodes, np.mean(shortfall_deque)))
print('\nAverage Implementation Shortfall: ${:,.2f} \n'.format(np.mean(shortfall_hist)))
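# Added illustration (not in the original notebook): once training has
# finished, the learned policy can be evaluated greedily by disabling the
# exploration noise. It reuses exactly the environment API from the loop above.
cur_state = env.reset(seed=episodes, liquid_time=lqt, num_trades=n_trades, lamb=tr)
env.start_transactions()
for i in range(n_trades + 1):
    action = agent.act(cur_state, add_noise=False)
    new_state, reward, done, info = env.step(action)
    cur_state = new_state
    if info.done:
        print('Greedy-policy implementation shortfall: ${:,.2f}'.format(info.implementation_shortfall))
        break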
###Output
C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\functional.py:1614: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
###Markdown
Deep Reinforcement Learning for Optimal Execution of Portfolio Transactions Introduction This notebook demonstrates how to use Deep Reinforcement Learning (DRL) for optimizing the execution of large portfolio transactions. We begin with a brief review of reinforcement learning and actor-critic methods. Then, you will use an actor-critic method to generate optimal trading strategies that maximize profit when liquidating a block of shares. Actor-Critic Methods In reinforcement learning, an agent makes observations and takes actions within an environment, and in return it receives rewards. Its objective is to learn to act in a way that will maximize its expected long-term rewards. Fig 1. - Reinforcement Learning. There are several types of RL algorithms, and they can be divided into three groups:- **Critic-Only**: Critic-Only methods, also known as Value-Based methods, first find the optimal value function and then derive an optimal policy from it. - **Actor-Only**: Actor-Only methods, also known as Policy-Based methods, search directly for the optimal policy in policy space. This is typically done by using a parameterized family of policies over which optimization procedures can be used directly. - **Actor-Critic**: Actor-Critic methods combine the advantages of actor-only and critic-only methods. In this method, the critic learns the value function and uses it to determine how the actor's policy parameters should be changed. In this case, the actor brings the advantage of computing continuous actions without the need for optimization procedures on a value function, while the critic supplies the actor with knowledge of the performance. Actor-critic methods usually have good convergence properties, in contrast to critic-only methods. The **Deep Deterministic Policy Gradients (DDPG)** algorithm is one example of an actor-critic method. Fig 2. - Actor-Critic Reinforcement Learning. In this notebook, we will use DDPG to determine the optimal execution of portfolio transactions. In other words, we will use the DDPG algorithm to solve the optimal liquidation problem. But before we can apply the DDPG algorithm, we first need to formulate the optimal liquidation problem so that it can be solved using reinforcement learning. In the next section we will see how to do this. Modeling Optimal Execution as a Reinforcement Learning Problem As we learned in the previous lessons, the optimal liquidation problem is a minimization problem, *i.e.* we need to find the trading list that minimizes the implementation shortfall. In order to solve this problem through reinforcement learning, we need to restate the optimal liquidation problem in terms of **States**, **Actions**, and **Rewards**. Let's start by defining our States. States The optimal liquidation problem entails that we sell all our shares within a given time frame. Therefore, our state vector must contain some information about the time remaining, or what is equivalent, the number of trades remaining.
We will use the latter and use the following features to define the state vector at time $t_k$:$$[r_{k-5},\, r_{k-4},\, r_{k-3},\, r_{k-2},\, r_{k-1},\, r_{k},\, m_{k},\, i_{k}]$$where:- $r_{k} = \log\left(\frac{\tilde{S}_k}{\tilde{S}_{k-1}}\right)$ is the log-return at time $t_k$- $m_{k} = \frac{N_k}{N}$ is the number of trades remaining at time $t_k$ normalized by the total number of trades.- $i_{k} = \frac{x_k}{X}$ is the remaining number of shares at time $t_k$ normalized by the total number of shares.The log-returns capture information about stock prices before time $t_k$, which can be used to detect possible price trends. The number of trades and shares remaining allow the agent to learn to sell all the shares within a given time frame. It is important to note that in real-world trading scenarios, this state vector can hold many more variables. Actions Since the optimal liquidation problem only requires us to sell stocks, it is reasonable to define the action $a_k$ to be the number of shares to sell at time $t_{k}$. However, if we start with millions of shares, interpreting the action directly as the number of shares to sell at each time step can lead to convergence problems, because the agent would need to produce actions with very high values. Instead, we will interpret the action $a_k$ as a **percentage**. In this case, the actions produced by the agent will only need to be between 0 and 1. Using this interpretation, we can determine the number of shares to sell at each time step using:$$n_k = a_k \times x_k$$where $x_k$ is the number of shares remaining at time $t_k$. Rewards Defining the rewards is trickier than defining states and actions, since the original problem is a minimization problem. One option is to use the difference between two consecutive utility functions. Remember that the utility function is given by:$$U(x) = E(x) + \lambda V(x)$$After each time step, we compute the utility using the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model for the remaining time and inventory, while holding the parameter $\lambda$ constant. Denoting the optimal trading trajectory computed at time $t$ as $x^*_t$, we define the reward as:$$R_{t} = \frac{U_t(x^*_t) - U_{t+1}(x^*_{t+1})}{U_t(x^*_t)}$$where we have normalized the difference to make the actor-critic model easier to train. Simulation Environment In order to train our DDPG algorithm we will use a very simple simulated trading environment. This environment simulates stock prices that follow a discrete arithmetic random walk, with permanent and temporary market impact functions that are linear functions of the rate of trading, just like in the Almgren and Chriss model. This simple trading environment serves as a starting point for creating more complex trading environments. You are encouraged to extend it by adding more complexity to simulate real-world trading dynamics, such as book orders, network latencies, trading fees, etc. The simulated environment is contained in the **syntheticChrissAlmgren.py** module. You are encouraged to take a look at it and modify its parameters as you wish. Let's take a look at the default parameters of our simulation environment. We have set the initial stock price to be $S_0 = 50$, and the total number of shares to sell to one million. This gives an initial portfolio value of $\$50$ million.
We have also set the trader's risk aversion to $\lambda = 10^{-6}$. The stock price will have 12\% annual volatility, a [bid-ask spread](https://www.investopedia.com/terms/b/bid-askspread.asp) of 1/8 and an average daily trading volume of 5 million shares. Assuming there are 250 trading days in a year, this gives a daily volatility in stock price of $0.12 / \sqrt{250} \approx 0.8\%$. We will use a liquidation time of $T = 60$ days and we will set the number of trades $N = 60$. This means that $\tau=\frac{T}{N} = 1$, i.e. we will be making one trade per day. For the temporary cost function we will set the fixed cost of selling to be 1/2 of the bid-ask spread, $\epsilon = 1/16$. We will set $\eta$ such that for each one percent of the daily volume we trade, we incur a price impact equal to the bid-ask spread. For example, trading at a rate of $5\%$ of the daily trading volume incurs a one-time cost on each trade of 5/8. Under this assumption we have $\eta =(1/8)/(0.01 \times 5 \times 10^6) = 2.5 \times 10^{-6}$. For the permanent costs, a common rule of thumb is that price effects become significant when we sell $10\%$ of the daily volume. If we suppose that significant means that the price depression is one bid-ask spread, and that the effect is linear for smaller and larger trading rates, then we have $\gamma = (1/8)/(0.1 \times 5 \times 10^6) = 2.5 \times 10^{-7}$. The tables below summarize the default parameters of the simulation environment.
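Before looking at those tables, here is a tiny illustrative sketch of the action and reward definitions above. The numbers are toy values, and the helper names `shares_to_sell` and `normalized_reward` are ours for illustration only, not functions from this project:
###Code
# Illustrative sketch only -- toy numbers, not the project's environment code.
def shares_to_sell(a_k, x_k):
    # n_k = a_k * x_k: the action is a percentage of the shares remaining
    return a_k * x_k
def normalized_reward(utility_t, utility_t_next):
    # R_t = (U_t(x*_t) - U_{t+1}(x*_{t+1})) / U_t(x*_t)
    return (utility_t - utility_t_next) / utility_t
print(shares_to_sell(0.05, 1_000_000))   # sell 5% of 1M remaining shares -> 50000.0
print(normalized_reward(2.0e5, 1.9e5))   # utility dropped by 5% -> reward 0.05
###Output
_____no_output_____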
###Code
import utils
# Get the default financial and AC Model parameters
financial_params, ac_params = utils.get_env_param()
financial_params
ac_params
###Output
_____no_output_____
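###Markdown
As a quick sanity check of the rule-of-thumb parameter values above, the short sketch below simply restates the arithmetic from the text (it does not read anything from `utils`, and the variable names are ours):
###Code
# Recompute the derived Almgren-Chriss parameters from the stated rules of thumb.
import numpy as np
bid_ask_spread = 1 / 8
daily_volume = 5e6
epsilon = bid_ask_spread / 2                    # fixed selling cost: 1/16 = 0.0625
eta = bid_ask_spread / (0.01 * daily_volume)    # temporary impact coefficient: 2.5e-6
gamma = bid_ask_spread / (0.10 * daily_volume)  # permanent impact coefficient: 2.5e-7
daily_vol = 0.12 / np.sqrt(250)                 # daily volatility: ~0.0076, i.e. ~0.8%
print(epsilon, eta, gamma, round(daily_vol, 4))
###Output
_____no_output_____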
###Markdown
Reinforcement Learning In the code below we use DDPG to find a policy that can generate optimal trading trajectories that minimize implementation shortfall, and can be benchmarked against the Almgren and Chriss model. We will implement a typical reinforcement learning workflow to train the actor and critic using the simulation environment. We feed the states observed from our simulator to an agent. The agent first predicts an action using the actor model and performs the action in the environment. Then, the environment returns the reward and new state. This process continues for the given number of episodes. To get accurate results, you should run the code for at least 10,000 episodes.
###Code
import numpy as np
import syntheticChrissAlmgren as sca
from ddpg_agent import Agent
from collections import deque
# Create simulation environment
env = sca.MarketEnvironment()
# Initialize Feed-forward DNNs for Actor and Critic models.
agent = Agent(state_size=env.observation_space_dimension(), action_size=env.action_space_dimension(), random_seed=0)
# Set the liquidation time
lqt = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
tr = 1e-6
# Set the number of episodes to run the simulation
episodes = 10000
shortfall_hist = np.array([])
shortfall_deque = deque(maxlen=100)
for episode in range(episodes):
    # Reset the environment
cur_state = env.reset(seed = episode, liquid_time = lqt, num_trades = n_trades, lamb = tr)
# set the environment to make transactions
env.start_transactions()
for i in range(n_trades + 1):
# Predict the best action for the current state.
action = agent.act(cur_state, add_noise = True)
# Action is performed and new state, reward, info are received.
new_state, reward, done, info = env.step(action)
# current state, action, reward, new state are stored in the experience replay
agent.step(cur_state, action, reward, new_state, done)
# roll over new state
cur_state = new_state
if info.done:
shortfall_hist = np.append(shortfall_hist, info.implementation_shortfall)
shortfall_deque.append(info.implementation_shortfall)
break
if (episode + 1) % 100 == 0: # print average shortfall over last 100 episodes
print('\rEpisode [{}/{}]\tAverage Shortfall: ${:,.2f}'.format(episode + 1, episodes, np.mean(shortfall_deque)))
print('\nAverage Implementation Shortfall: ${:,.2f} \n'.format(np.mean(shortfall_hist)))
###Output
_____no_output_____
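###Markdown
The `Agent` class used above encapsulates the full DDPG machinery (actor and critic networks, a replay buffer, and slowly-updated target networks); its implementation lives in `ddpg_agent.py` and is not shown here. As a rough, generic sketch of the soft target update that standard DDPG performs after each learning step (this mirrors typical DDPG implementations and is not necessarily the exact code in `ddpg_agent.py`):
###Code
# Generic DDPG soft update (illustrative sketch, not the project's actual agent code):
# theta_target <- tau * theta_local + (1 - tau) * theta_target
def soft_update(local_model, target_model, tau=1e-3):
    # Blend a small fraction of the local network's weights into the target network,
    # which stabilizes learning compared to copying the weights outright.
    for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):
        target_param.data.copy_(tau * local_param.data + (1.0 - tau) * target_param.data)
###Output
_____no_output_____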
###Markdown
& Deep Reinforcement Learning for Optimal Execution of Portfolio Transactions IntroductionThis notebook demonstrates how to use Deep Reinforcement Learning (DRL) for optimizing the execution of large portfolio transactions. We begin with a brief review of reinforcement learning and actor-critic methods. Then, you will use an actor-critic method to generate optimal trading strategies that maximize profit when liquidating a block of shares. Actor-Critic MethodsIn reinforcement learning, an agent makes observations and takes actions within an environment, and in return it receives rewards. Its objective is to learn to act in a way that will maximize its expected long-term rewards. Fig 1. - Reinforcement Learning. There are several types of RL algorithms, and they can be divided into three groups:- **Critic-Only**: Critic-Only methods, also known as Value-Based methods, first find the optimal value function and then derive an optimal policy from it. - **Actor-Only**: Actor-Only methods, also known as Policy-Based methods, search directly for the optimal policy in policy space. This is typically done by using a parameterized family of policies over which optimization procedures can be used directly. - **Actor-Critic**: Actor-Critic methods combine the advantages of actor-only and critic-only methods. In this method, the critic learns the value function and uses it to determine how the actor's policy parramerters should be changed. In this case, the actor brings the advantage of computing continuous actions without the need for optimization procedures on a value function, while the critic supplies the actor with knowledge of the performance. Actor-critic methods usually have good convergence properties, in contrast to critic-only methods. The **Deep Deterministic Policy Gradients (DDPG)** algorithm is one example of an actor-critic method. Fig 2. - Actor-Critic Reinforcement Learning. In this notebook, we will use DDPG to determine the optimal execution of portfolio transactions. In other words, we will use the DDPG algorithm to solve the optimal liquidation problem. But before we can apply the DDPG algorithm we first need to formulate the optimal liquidation problem so that in can be solved using reinforcement learning. In the next section we will see how to do this. Modeling Optimal Execution as a Reinforcement Learning ProblemAs we learned in the previous lessons, the optimal liquidation problem is a minimization problem, *i.e.* we need to find the trading list that minimizes the implementation shortfall. In order to solve this problem through reinforcement learning, we need to restate the optimal liquidation problem in terms of **States**, **Actions**, and **Rewards**. Let's start by defining our States. StatesThe optimal liquidation problem entails that we sell all our shares within a given time frame. Therefore, our state vector must contain some information about the time remaining, or what is equivalent, the number trades remaning. 
We will use the latter and use the following features to define the state vector at time $t_k$:$$[r_{k-5},\, r_{k-4},\, r_{k-3},\, r_{k-2},\, r_{k-1},\, r_{k},\, m_{k},\, i_{k}]$$where:- $r_{k} = \log\left(\frac{\tilde{S}_k}{\tilde{S}_{k-1}}\right)$ is the log-return at time $t_k$- $m_{k} = \frac{N_k}{N}$ is the number of trades remaining at time $t_k$ normalized by the total number of trades.- $i_{k} = \frac{x_k}{X}$ is the remaining number of shares at time $t_k$ normalized by the total number of shares.The log-returns capture information about stock prices before time $t_k$, which can be used to detect possible price trends. The number of trades and shares remaining allow the agent to learn to sell all the shares within a given time frame. It is important to note that in real world trading scenarios, this state vector can hold many more variables. ActionsSince the optimal liquidation problem only requires us to sell stocks, it is reasonable to define the action $a_k$ to be the number of shares to sell at time $t_{k}$. However, if we start with millions of stocks, intepreting the action directly as the number of shares to sell at each time step can lead to convergence problems, because, the agent will need to produce actions with very high values. Instead, we will interpret the action $a_k$ as a **percentage**. In this case, the actions produced by the agent will only need to be between 0 and 1. Using this interpretation, we can determine the number of shares to sell at each time step using:$$n_k = a_k \times x_k$$where $x_k$ is the number of shares remaining at time $t_k$. RewardsDefining the rewards is trickier than defining states and actions, since the original problem is a minimization problem. One option is to use the difference between two consecutive utility functions. Remeber the utility function is given by:$$U(x) = E(x) + λ V(x)$$After each time step, we compute the utility using the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model for the remaining time and inventory while holding parameter λ constant. Denoting the optimal trading trajectory computed at time $t$ as $x^*_t$, we define the reward as: $$R_{t} = {{U_t(x^*_t) - U_{t+1}(x^*_{t+1})}\over{U_t(x^*_t)}}$$Where we have normalized the difference to train the actor-critic model easier. Simulation EnvironmentIn order to train our DDPG algorithm we will use a very simple simulated trading environment. This environment simulates stock prices that follow a discrete arithmetic random walk and that the permanent and temporary market impact functions are linear functions of the rate of trading, just like in the Almgren and Chriss model. This simple trading environment serves as a starting point to create more complex trading environments. You are encouraged to extend this simple trading environment by adding more complexity to simulte real world trading dynamics, such as book orders, network latencies, trading fees, etc... The simulated enviroment is contained in the **syntheticChrissAlmgren.py** module. You are encouraged to take a look it and modify its parameters as you wish. Let's take a look at the default parameters of our simulation environment. We have set the intial stock price to be $S_0 = 50$, and the total number of shares to sell to one million. This gives an initial portfolio value of $\$50$ Million dollars. 
We have also set the trader's risk aversion to $\lambda = 10^{-6}$.The stock price will have 12\% annual volatility, a [bid-ask spread](https://www.investopedia.com/terms/b/bid-askspread.asp) of 1/8 and an average daily trading volume of 5 million shares. Assuming there are 250 trading days in a year, this gives a daily volatility in stock price of $0.12 / \sqrt{250} \approx 0.8\%$. We will use a liquiditation time of $T = 60$ days and we will set the number of trades $N = 60$. This means that $\tau=\frac{T}{N} = 1$ which means we will be making one trade per day. For the temporary cost function we will set the fixed cost of selling to be 1/2 of the bid-ask spread, $\epsilon = 1/16$. we will set $\eta$ such that for each one percent of the daily volume we trade, we incur a price impact equal to the bid-askspread. For example, trading at a rate of $5\%$ of the daily trading volume incurs a one-time cost on each trade of 5/8. Under this assumption we have $\eta =(1/8)/(0.01 \times 5 \times 10^6) = 2.5 \times 10^{-6}$.For the permanent costs, a common rule of thumb is that price effects become significant when we sell $10\%$ of the daily volume. If we suppose that significant means that the price depression is one bid-ask spread, and that the effect is linear for smaller and larger trading rates, then we have $\gamma = (1/8)/(0.1 \times 5 \times 10^6) = 2.5 \times 10^{-7}$. The tables below summarize the default parameters of the simulation environment
###Code
%load_ext autoreload
%autoreload 2
from os import path
import sys
repo_path = path.dirname(path.dirname(path.abspath("__file__")))
sys.path.append(repo_path)
import pandas as pd
pd.__version__
import statsmodels.api as sm
sm.show_versions()
import utils
# Get the default financial and AC Model parameters
financial_params, ac_params = utils.get_env_param()
financial_params
ac_params
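# Quick arithmetic check of the derived parameters quoted above (a sketch,
# not values read from utils): daily volatility, eta and gamma
import numpy as np
print(0.12 / np.sqrt(250))       # ~0.0076, i.e. roughly 0.8% daily volatility
print((1.0 / 8) / (0.01 * 5e6))  # eta = 2.5e-6
print((1.0 / 8) / (0.1 * 5e6))   # gamma = 2.5e-7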
###Output
_____no_output_____
###Markdown
Reinforcement LearningIn the code below we use DDPG to find a policy that can generate optimal trading trajectories that minimize implementation shortfall, and can be benchmarked against the Almgren and Chriss model. We will implement a typical reinforcement learning workflow to train the actor and critic using the simulation environment. We feed the states observed from our simulator to an agent. The agent first predicts an action using the actor model and performs the action in the environment. Then, the environment returns the reward and new state. This process continues for the given number of episodes. To get accurate results, you should run the code for at least 10,000 episodes.
###Code
import numpy as np
import syntheticChrissAlmgren as sca
from ddpg_agent import Agent
from collections import deque
# Create simulation environment
env = sca.MarketEnvironment()
# Initialize Feed-forward DNNs for Actor and Critic models.
agent = Agent(state_size=env.observation_space_dimension(), action_size=env.action_space_dimension(), random_seed=0)
# Set the liquidation time
lqt = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
tr = 1e-6
# Set the number of episodes to run the simulation
episodes = 10000
shortfall_hist = np.array([])
shortfall_deque = deque(maxlen=100)
for episode in range(episodes):
# Reset the enviroment
cur_state = env.reset(seed = episode, liquid_time = lqt, num_trades = n_trades, lamb = tr)
# set the environment to make transactions
env.start_transactions()
for i in range(n_trades + 1):
# Predict the best action for the current state.
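# (add_noise=True adds exploration noise to the deterministic policy's action)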
action = agent.act(cur_state, add_noise = True)
# Action is performed and new state, reward, info are received.
new_state, reward, done, info = env.step(action)
# current state, action, reward, new state are stored in the experience replay
agent.step(cur_state, action, reward, new_state, done)
# roll over new state
cur_state = new_state
if info.done:
shortfall_hist = np.append(shortfall_hist, info.implementation_shortfall)
shortfall_deque.append(info.implementation_shortfall)
break
if (episode + 1) % 100 == 0: # print average shortfall over last 100 episodes
print('\rEpisode [{}/{}]\tAverage Shortfall: ${:,.2f}'.format(episode + 1, episodes, np.mean(shortfall_deque)))
print('\nAverage Implementation Shortfall: ${:,.2f} \n'.format(np.mean(shortfall_hist)))
import matplotlib.pyplot as plt
plt.plot(shortfall_hist)
import torch
torch.save(agent.actor_local.state_dict(), 'models/checkpoint-actor.pth')
torch.save(agent.critic_local.state_dict(), 'models/checkpoint-critic.pth')
###Output
_____no_output_____
###Markdown
A second training run, this time for 3,000 episodes:
###Code
import numpy as np
import syntheticChrissAlmgren as sca
from ddpg_agent import Agent
from collections import deque
# Create simulation environment
env = sca.MarketEnvironment()
# Initialize Feed-forward DNNs for Actor and Critic models.
agent = Agent(state_size=env.observation_space_dimension(), action_size=env.action_space_dimension(), random_seed=0)
# Set the liquidation time
lqt = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
tr = 1e-6
# Set the number of episodes to run the simulation
episodes = 3000
shortfall_hist = np.array([])
shortfall_deque = deque(maxlen=100)
for episode in range(episodes):
# Reset the enviroment
cur_state = env.reset(seed = episode, liquid_time = lqt, num_trades = n_trades, lamb = tr)
# set the environment to make transactions
env.start_transactions()
for i in range(n_trades + 1):
# Predict the best action for the current state.
action = agent.act(cur_state, add_noise = True)
# Action is performed and new state, reward, info are received.
new_state, reward, done, info = env.step(action)
# current state, action, reward, new state are stored in the experience replay
agent.step(cur_state, action, reward, new_state, done)
# roll over new state
cur_state = new_state
if info.done:
shortfall_hist = np.append(shortfall_hist, info.implementation_shortfall)
shortfall_deque.append(info.implementation_shortfall)
break
if (episode + 1) % 100 == 0: # print average shortfall over last 100 episodes
print('\rEpisode [{}/{}]\tAverage Shortfall: ${:,.2f}'.format(episode + 1, episodes, np.mean(shortfall_deque)))
print('\nAverage Implementation Shortfall: ${:,.2f} \n'.format(np.mean(shortfall_hist)))
###Output
Episode [100/3000] Average Shortfall: $2,276,780.07
Episode [200/3000] Average Shortfall: $2,562,254.63
Episode [300/3000] Average Shortfall: $2,562,500.00
Episode [400/3000] Average Shortfall: $2,562,500.00
Episode [500/3000] Average Shortfall: $2,562,500.00
Episode [600/3000] Average Shortfall: $2,562,500.00
Episode [700/3000] Average Shortfall: $2,562,500.00
Episode [800/3000] Average Shortfall: $2,562,500.00
Episode [900/3000] Average Shortfall: $2,562,500.00
Episode [1000/3000] Average Shortfall: $2,562,500.00
Episode [1100/3000] Average Shortfall: $2,562,500.00
Episode [1200/3000] Average Shortfall: $2,562,500.00
Episode [1300/3000] Average Shortfall: $2,562,500.00
Episode [1400/3000] Average Shortfall: $2,562,500.00
Episode [1500/3000] Average Shortfall: $2,562,500.00
Episode [1600/3000] Average Shortfall: $2,562,500.00
Episode [1700/3000] Average Shortfall: $2,562,500.00
Episode [1800/3000] Average Shortfall: $2,562,500.00
Episode [1900/3000] Average Shortfall: $2,562,500.00
Episode [2000/3000] Average Shortfall: $2,562,500.00
Episode [2100/3000] Average Shortfall: $2,248,029.72
Episode [2200/3000] Average Shortfall: $720,982.30
Episode [2300/3000] Average Shortfall: $660,355.06
Episode [2400/3000] Average Shortfall: $671,785.43
Episode [2500/3000] Average Shortfall: $591,557.90
Episode [2600/3000] Average Shortfall: $706,885.56
Episode [2700/3000] Average Shortfall: $663,251.46
Episode [2800/3000] Average Shortfall: $655,084.07
Episode [2900/3000] Average Shortfall: $669,189.00
Episode [3000/3000] Average Shortfall: $661,448.00
Average Implementation Shortfall: $1,973,753.44
###Markdown
A third run, again for the full 10,000 episodes:
###Code
import numpy as np
import syntheticChrissAlmgren as sca
from ddpg_agent import Agent
from collections import deque
# Create simulation environment
env = sca.MarketEnvironment()
# Initialize Feed-forward DNNs for Actor and Critic models.
agent = Agent(state_size=env.observation_space_dimension(), action_size=env.action_space_dimension(), random_seed=0)
# Set the liquidation time
lqt = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
tr = 1e-6
# Set the number of episodes to run the simulation
episodes = 10000
shortfall_hist = np.array([])
shortfall_deque = deque(maxlen=100)
for episode in range(episodes):
# Reset the enviroment
cur_state = env.reset(seed = episode, liquid_time = lqt, num_trades = n_trades, lamb = tr)
# set the environment to make transactions
env.start_transactions()
for i in range(n_trades + 1):
# Predict the best action for the current state.
action = agent.act(cur_state, add_noise = True)
# Action is performed and new state, reward, info are received.
new_state, reward, done, info = env.step(action)
# current state, action, reward, new state are stored in the experience replay
agent.step(cur_state, action, reward, new_state, done)
# roll over new state
cur_state = new_state
if info.done:
shortfall_hist = np.append(shortfall_hist, info.implementation_shortfall)
shortfall_deque.append(info.implementation_shortfall)
break
if (episode + 1) % 100 == 0: # print average shortfall over last 100 episodes
print('\rEpisode [{}/{}]\tAverage Shortfall: ${:,.2f}'.format(episode + 1, episodes, np.mean(shortfall_deque)))
print('\nAverage Implementation Shortfall: ${:,.2f} \n'.format(np.mean(shortfall_hist)))
###Output
_____no_output_____ |
python/ejercicios/votes/build-votes-with-cities.ipynb | ###Markdown
Preparación del CSV de votosEste script se limita a leer el CSV que contiene el número de votos por partido y municipio y lo normaliza. Además lo combina con el CSV con los datos de municipios, de manera que el CSV que genera incluye el código del municipio, la comunidad, la provicia, el municipio, el partido y el número de votos. Se ha usado Spark, pero cualquier otro método (pandas, por ejemplo) sería perfectamente válido.Sólo se debe usar este CSV si no se consigue avanzar con la combinación de ambos datasets en KSQL o en Spark Streaming.
###Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
import pyspark.sql.functions as fn
spark = SparkSession \
.builder \
.appName("buildvoteswithcities") \
.getOrCreate()
df_votes = spark.read.csv('votos-elecciones.csv', sep=";", header=True)
parties = df_votes.columns
parties
parties.remove('Codigo')
parties.remove('Mesas')
parties.remove('Censo')
parties.remove('Votantes')
parties.remove('Validos')
len(parties)
array_of_cols = ["'{0}', {0}".format(p) for p in parties]
print(array_of_cols)
string_of_cols = ", ".join(array_of_cols)
print(string_of_cols)
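# Spark's stack() expression unpivots the wide per-party columns into long
# format: one (Partido, Votos) row per party, keeping only non-zero counts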
df_votes_expl = df_votes.select('Codigo', 'Mesas', 'Censo', 'Votantes', 'Validos',
fn.expr("stack(" + str(len(parties)) + ", " + string_of_cols + ") as (Partido, Votos)")).\
where(col('Votos') > 0)
df_votes_expl.show()
df_cities = spark.read.csv('PECMunicipios.csv', sep=";", header=True, encoding='ISO-8859-1')
df_cities = df_cities.withColumn('Comunidad', fn.trim(col('Comunidad'))).\
withColumn('Provincia', fn.trim(col('Provincia'))).\
withColumn('Municipio', fn.trim(col('Municipio'))).\
withColumnRenamed('Codigo', 'CodigoPoblacion')
df_cities.take(1)
df_result = df_cities.join(df_votes_expl, df_cities.CodigoPoblacion == df_votes_expl.Codigo, how='inner').\
drop('CodigoPoblacion', 'Censo', 'Votantes', 'Validos', 'Mesas')
df_result.printSchema()
df_result.write.csv('votes', header=True)
!mv votes/part*.csv votes.csv
###Output
_____no_output_____ |
notebooks/benchmark/crash_predict_benchmark.ipynb | ###Markdown
Benchmark Model for crash prediction Developed by: bpben Details the steps of data processing, feature engineering, and model tuning/testing for crash and road data
###Code
import re
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import scipy.stats as ss
from glob import glob
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler
from scipy.stats import describe
###Output
_____no_output_____
###Markdown
Helpers for tuning/testing models, available [here](https://github.com/bpben/model_helpers) as well
###Code
import numpy as np
import pandas as pd
import sklearn.ensemble as ske
import sklearn.svm as svm
import sklearn.linear_model as skl
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import RandomizedSearchCV, KFold, StratifiedKFold, GroupKFold, GroupShuffleSplit
from sklearn.calibration import CalibratedClassifierCV
from sklearn.preprocessing import StandardScaler
class Indata():
scoring = None
data = None
train_x, train_y, test_x, test_y = None, None, None, None
is_split = 0
#init with pandas DF and target column name, specify scoring observations
def __init__(self, data, target, scoring=None):
#If scoring observations, store under scoring attribute
if scoring is not None:
self.data = data[~(scoring)]
self.scoring = data[scoring]
else:
self.data = data
self.target = target
# Split into train/test
# pct : percent training observations
# datesort : specify date column for sorting values
# If this is not None, split will be non-random (i.e. split on sorted obs)
def tr_te_split(self, pct, datesort=None, group_col=None):
"""
Split into train/test
pct : percent training observations
datesort : specify date column for sorting values
If this is not None, split will be non-random (i.e. split on sorted obs)
group_col : group column name for groupkfold split
Will also be passed to tuner
"""
if group_col:
self.group_col = group_col
grouper = GroupShuffleSplit(n_splits=1, train_size=pct)
g = grouper.split(self.data, groups=self.data[group_col])
# get the actual indexes of the training set
inds, _ = tuple(*g)
# translate that into boolean array
inds = self.data.index[inds]
inds = self.data.index.isin(inds)
elif datesort:
self.data.sort_values(datesort, inplace=True)
self.data.reset_index(drop=True, inplace=True)
inds = np.arange(0.0,len(self.data)) / len(self.data) < pct
else:
inds = np.random.rand(len(self.data)) < pct
self.train_x = self.data[inds]
print('Train obs:', len(self.train_x))
self.train_y = self.data[self.target][inds]
self.test_x = self.data[~inds]
print('Test obs:', len(self.test_x))
self.test_y = self.data[self.target][~inds]
self.is_split = 1
class Tuner():
"""
Initiates with indata class, will tune series of models according to parameters.
Outputs RandomizedGridCV results and parameterized model in dictionary
"""
data = None
train_x, train_y = None, None
group_col = None
def __init__(self, indata, best_models=None, grid_results=None):
if indata.is_split == 0:
raise ValueError('Data is not split, cannot be tested')
# check if grouped by some column
if hasattr(indata,'group_col'):
self.group_col = indata.group_col
self.data = indata.data
self.train_x = indata.train_x
self.train_y = indata.train_y
if best_models is None:
self.best_models = {}
if grid_results is None:
self.grid_results = pd.DataFrame()
def make_grid(self, model, cvparams, mparams):
#Makes CV grid
# to implement, no capability for GroupKFold for randomizedsearch
#if self.group_col:
#cv = GroupKFold(cvparams['folds'])
grid = RandomizedSearchCV(
model(),scoring=cvparams['pmetric'],
cv = KFold(cvparams['folds'], cvparams['shuffle']),
refit=False, n_iter=cvparams['iter'],
param_distributions=mparams, verbose=1)
return(grid)
def run_grid(self, grid, train_x, train_y):
grid.fit(train_x, train_y)
results = pd.DataFrame(grid.cv_results_)[['mean_test_score','mean_train_score','params']]
best = {}
best['bp'] = grid.best_params_
best[grid.scoring] = grid.best_score_
return(best, results)
def tune(self, name, m_name, features, cvparams, mparams):
if hasattr(ske, m_name):
model = getattr(ske, m_name)
elif hasattr(skl, m_name):
model = getattr(skl, m_name)
elif hasattr(xgb, m_name):
model = getattr(xgb, m_name)
elif hasattr(svm, m_name):
model = getattr(svm, m_name)
else:
raise ValueError('Model name is invalid.')
grid = self.make_grid(model, cvparams, mparams)
best, results = self.run_grid(grid, self.train_x[features], self.train_y)
results['name'] = name
results['m_name'] = m_name
self.grid_results = self.grid_results.append(results)
best['model'] = model(**best['bp'])
best['features'] = list(features)
self.best_models.update({name: best})
class Tester():
"""
Initiates with indata class, receives parameterized sklearn models, prints and stores results
"""
def __init__(self, data, rundict=None):
if data.is_split == 0 :
raise ValueError('Data is not split, cannot be tested')
else:
self.data = data
if rundict is None:
self.rundict = {}
def init_tuned(self, tuned):
""" pass Tuner object, populatest with names, models, features """
if tuned.best_models=={}:
raise ValueError('No tuned models found')
else:
self.rundict.update(tuned.best_models)
def predsprobs(self, model, test_x):
""" Produce predicted class and probabilities """
# if the model doesn't have predict proba, will be treated as GLM
if hasattr(model, 'predict_proba'):
preds = model.predict(test_x)
probs = model.predict_proba(test_x)[:,1]
else:
probs = model.predict(test_x)
preds = (probs>=.5).astype(int)
return(preds, probs)
def get_metrics(self, preds, probs, test_y):
""" Produce metrics (f1 score, AUC, brier) """
# if test is not binary, just run brier
if len(np.unique(test_y))==2:
f1_s = metrics.f1_score(test_y, preds)
roc = metrics.roc_auc_score(test_y, probs)
else:
f1_s, roc = None, None
brier = metrics.brier_score_loss(test_y, probs)
return(f1_s, roc, brier)
def make_result(self, model, test_x, test_y):
""" gets predictions and runs metrics """
preds, probs = self.predsprobs(model, test_x)
f1_s, roc, brier = self.get_metrics(preds, probs, test_y)
print("f1_score: ", f1_s)
print("roc auc: ", roc)
print("brier_score: ", brier)
result = {}
result['f1_s'] = f1_s
result['roc'] = roc
result['brier'] = brier
return(result)
def run_model(self, name, model, features, cal=True, cal_m='sigmoid'):
"""
Run a specific model (not from the Tuner class)
By default, calibrates predictions and produces metrics for them
Will also store in rundict object
"""
results = {}
results['features'] = list(features)
results['model'] = model
print("Fitting {} model with {} features".format(name, len(features)))
if cal:
# Need disjoint calibration/training datasets
# Split 50/50
rnd_ind = np.random.rand(len(self.data.train_x)) < .5
train_x = self.data.train_x[features][rnd_ind]
train_y = self.data.train_y[rnd_ind]
cal_x = self.data.train_x[features][~rnd_ind]
cal_y = self.data.train_y[~rnd_ind]
else:
train_x = self.data.train_x[features]
train_y = self.data.train_y
m_fit = model.fit(train_x, train_y)
result = self.make_result(
m_fit,
self.data.test_x[features],
self.data.test_y)
results['raw'] = result
results['m_fit'] = m_fit
if cal:
print("calibrated:")
m_c = CalibratedClassifierCV(model, method = cal_m)
m_fit_c = m_c.fit(cal_x, cal_y)
result_c = self.make_result(m_fit_c, self.data.test_x[features], self.data.test_y)
results['calibrated'] = result_c
print("\n")
if name in self.rundict:
self.rundict[name].update(results)
else:
self.rundict.update({name:results})
def run_tuned(self, name, cal=True, cal_m='sigmoid'):
""" Wrapper for run_model when using Tuner object """
self.run_model(name, self.rundict[name]['model'], self.rundict[name]['features'], cal, cal_m)
###Output
_____no_output_____
###Markdown
Data processingThe approach here is to create four time-based features:1. crashes in the past week2. crashes in the past month3. crashes in the past quarter (three months)4. average crashes per week up to the target weekAll features except 4 are calculated to exclude one another. That is, crashes in the past month do not include the past week's crashes. Crashes in the past quarter do not include the past month.
###Code
SEG_CHARS = ['AADT', 'SPEEDLIMIT', 'Struct_Cnd', 'Surface_Tp', 'F_F_Class']
# Read in data
data = pd.read_csv('../../data/processed/vz_predict_dataset.csv.gz', compression='gzip', dtype={'segment_id':'str'})
data.sort_values(['segment_id', 'year', 'week'], inplace=True)
# get segments with non-zero crashes
data_nonzero = data.set_index('segment_id').loc[data.groupby('segment_id').crash.sum()>0]
data_nonzero.reset_index(inplace=True)
def format_crash_data(data, col, target_week, target_year):
""" formats crash data for train/test
target_week: week to predict (make into binary target)
target_year: year for predicted week
note: data must be available for 4 months prior to target
gets previous week count, previous month count, previous quarter count, avg per week
"""
assert target_week>16
pre_week = target_week - 1
pre_month = range(pre_week-4, target_week)
pre_quarter = range(pre_month[0]-12, target_week)
# week interval for each segment
# full range = pre_quarter : target
sliced = data.loc[(slice(None),slice(target_year,target_year), slice(1, target_week)),:]
week_data = sliced[col].unstack(2)
week_data.reset_index(level=1, inplace=True)
# aggregate
week_data['pre_month'] = week_data[pre_month].sum(axis=1)
week_data['pre_quarter'] = week_data[pre_quarter].sum(axis=1)
week_data['pre_week'] = week_data[pre_week]
# avg as of target week
except_target = data.loc[(slice(None),
slice(target_year,target_year),
slice(target_week,None)),:].index
avg_week = data.drop(except_target)
avg_week = avg_week.reset_index().groupby('segment_id')[col].mean()
avg_week.name = 'avg_week'
# join to week data
week_data = week_data.join(avg_week)
# binarize target
week_data['target'] = (week_data[target_week]>0).astype(int)
week_data = week_data.reset_index()
return(week_data[['segment_id','target', 'pre_week',
'pre_month', 'pre_quarter', 'avg_week']])
# simple add concern, any concern reported 2016
concern_observed = data_nonzero[data_nonzero.year==2016].groupby('segment_id').concern.max()
concern_observed.name = 'concern_observed'
crash_lags = format_crash_data(data_nonzero.set_index(['segment_id','year','week']), 'crash', 19, 2017)
data_segs = data_nonzero.groupby('segment_id')[SEG_CHARS].max() # grab the highest values from each column for a segment, not used in model?
data_segs.reset_index(inplace=True)
# add in atrs
atrs = pd.read_csv('../../data/processed/atrs_predicted.csv', dtype={'id':'str'})
# for some reason pandas reads the id as float before str conversions
atrs['id'] = atrs.id.apply(lambda x: x.split('.')[0])
data_segs = data_segs.merge(atrs[['id','speed_coalesced', 'volume_coalesced']],
left_on='segment_id', right_on='id')
# add in tmcs - conflicts
# it either has some or doesn't
# I think just filling na = 0 should work for now
tmcs = pd.read_json('../../data/processed/tmc_summary.json',
dtype={'near_id':str})[['near_id','Conflict']]
data_segs = data_segs.merge(tmcs, left_on='segment_id', right_on='near_id', how='left')
data_segs.Conflict.fillna(0, inplace=True)
data_model = crash_lags.merge(data_segs, left_on='segment_id', right_on='segment_id')
# add concerns
data_model = data_model.merge(concern_observed.reset_index(), on='segment_id')
# Add in adjacency info
adj_info = pd.read_csv('../../data/processed/adjacency_info.csv', usecols=['segment_id', 'orig_id'],
dtype={'segment_id':'str', 'orig_id':'str'})
# link adjacent segments for segments with crashes
adj_info = adj_info[adj_info.segment_id.isin(data_model.segment_id)]
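# the self-join on orig_id links each segment to every segment sharing the same orig_id (treated as adjacent)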
adj_mat = adj_info.merge(adj_info, on='orig_id')
adj_mat = adj_mat[['segment_id_x', 'segment_id_y']]
adj_mat.drop_duplicates(inplace=True)
# including segments with only self-adjacent
# for this, need to ensure they don't join to their own data
adj_mat.loc[adj_mat.segment_id_x==adj_mat.segment_id_y, 'segment_id_y'] = np.NaN
def get_adj_crash_lags(target_week, target_year):
"""calculate total number of crashes that occurred
in adjacent segments for target week and lags as defined in format_crash_data
"""
lag_data = format_crash_data(data_nonzero.set_index(['segment_id','year','week']), 'crash', target_week, target_year)
merge_lags = adj_mat.merge(lag_data, left_on='segment_id_y', right_on='segment_id', how='left')
adj_lags = merge_lags.groupby(['segment_id_x'])['pre_week', 'pre_month', 'pre_quarter'].sum()
return adj_lags
adj_lags = get_adj_crash_lags(19, 2017)
# fill those with only self-adj zero
adj_lags.fillna(0, inplace=True)
data_model = data_model.merge(adj_lags, how='left', left_on='segment_id', right_index=True, suffixes=('', '_adj'))
data_model.fillna(0, inplace=True)
# standardize for LR
#from sklearn.preprocessing import scale
#data_scaled = pd.DataFrame(scale(data_model['AADT', 'SPEEDLIMIT']),
# columns=[f+'_scaled' for f in features])
#data_model = pd.concat([data_model, data_scaled], axis=1)
# trying a different feature set
dummy_att = ['SPEEDLIMIT', 'Struct_Cnd', 'Surface_Tp', 'F_F_Class']
for d in dummy_att:
t = pd.get_dummies(data_model[d])
t.columns = [d+str(c) for c in t.columns]
data_model = pd.concat([data_model, t], axis=1)
# aadt - log-transform
data_model['log_aadt'] = np.log(data_model.AADT+1)
# add segment type
data_model['intersection'] = data_model.segment_id.map(lambda x: x[:2]!='00').astype(int)
# features
features = data_model.filter(regex='[0-9]').columns.tolist() + ['log_aadt', 'intersection']
# Features
#features = [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', u'AADT', u'SPEEDLIMIT',
# u'Struct_Cnd', u'Surface_Tp', u'F_F_Class', u'pre_week_adj',
# u'pre_month_adj', u'pre_quarter_adj']
features += [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', 'concern_observed']
#features += ['speed_coalesced', 'volume_coalesced']
#features += ['Conflict']
lm_features = list(set(features) - set(['SPEEDLIMIT1', 'Struct_Cnd0', 'F_F_Class0']))
features = [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', 'concern_observed']
lm_features = [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', 'concern_observed']
###Output
_____no_output_____
###Markdown
Model tuningThis uses the model helpers above. They're based on sklearn and implement a randomized grid search with K-fold cross-validation.
###Code
#Initialize data
df = Indata(data_model, 'target')
#Create train/test split
df.tr_te_split(.7)
#Parameters for model
# class weight
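# (w is the inverse prevalence of the positive class, upweighting the rare crash=1 rows)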
a = data_model['target'].value_counts(normalize=True)
w = 1/a[1]
#Model parameters
params = dict()
#cv parameters
cvp = dict()
cvp['pmetric'] = 'roc_auc'
cvp['iter'] = 5 #number of iterations
cvp['folds'] = 5 #folds for cv (default)
cvp['shuffle'] = True
#LR parameters
mp = dict()
mp['LogisticRegression'] = dict()
mp['LogisticRegression']['penalty'] = ['l1','l2']
mp['LogisticRegression']['C'] = ss.beta(a=5,b=2) #beta distribution for selecting reg strength
mp['LogisticRegression']['class_weight'] = ['balanced']
#RF model parameters
mp['RandomForestClassifier'] = dict()
mp['RandomForestClassifier']['n_estimators'] = [2**8] #number of trees in the forest
mp['RandomForestClassifier']['max_features'] = ss.beta(a=5,b=1) #number of features at split
mp['RandomForestClassifier']['max_leaf_nodes'] = ss.nbinom(n=2,p=0.001,loc=100) #max number of leaves to create
#mp['RandomForestClassifier']['class_weight'] = ['balanced']
mp['RandomForestClassifier']['class_weight'] = [{0:1,1:w}]
#xgBoost model parameters
mp['XGBClassifier'] = dict()
mp['XGBClassifier']['max_depth'] = range(3, 7)
mp['XGBClassifier']['min_child_weight'] = range(1, 5)
mp['XGBClassifier']['learning_rate'] = ss.beta(a=2,b=15)
mp['XGBClassifier']['scale_pos_weight'] = [w]
#Initialize tuner
tune = Tuner(df)
#Base XG model
tune.tune('XG_base', 'XGBClassifier', features, cvp, mp['XGBClassifier'])
#Base RF model
tune.tune('RF_base', 'RandomForestClassifier', features, cvp, mp['RandomForestClassifier'])
#Base LR model
tune.tune('LR_base', 'LogisticRegression', lm_features, cvp, mp['LogisticRegression'])
#Display results
tune.grid_results
# Run test
test = Tester(df)
test.init_tuned(tune)
test.run_tuned('RF_base', cal=False)
test.run_tuned('LR_base', cal=False)
test.run_tuned('XG_base', cal=False)
t = test.rundict['XG_base']['m_fit'].predict_proba(test.data.test_x[features])[::,1]
metrics.roc_auc_score(test.data.test_y,t)
# Check feature importance
f_importance = test.rundict['XG_base']['m_fit'].feature_importances_
fi = list(zip(features, f_importance))
print sorted(fi, key=lambda x: x[1], reverse=True)
from sklearn.metrics import roc_auc_score
# trying some other models
minus_adj = list(set(lm_features) - set([x for x in lm_features if x.find('volume')!=-1]))
xg = xgb.XGBClassifier(**test.rundict['XG_base']['bp'])
xg.fit(test.data.train_x[minus_adj], test.data.train_y)
preds = xg.predict_proba(
test.data.test_x[minus_adj])[::,1]
roc_auc_score(test.data.test_y, preds)
# trying some other models
minus_adj = list(set(lm_features) - set([x for x in lm_features if x.find('adj')!=-1]))
lr = skl.LogisticRegression(**test.rundict['LR_base']['bp'])
lr.fit(test.data.train_x[minus_adj], test.data.train_y)
preds = lr.predict_proba(
test.data.test_x[minus_adj])[::,1]
roc_auc_score(test.data.test_y, preds)
lr = skl.LogisticRegression(**test.rundict['LR_base']['bp'])
lr.fit(test.data.train_x['avg_week'].reshape(-1,1), test.data.train_y)
preds = lr.predict_proba(
test.data.test_x['avg_week'].reshape(-1,1))[::,1]
roc_auc_score(test.data.test_y, preds)
###Output
/Users/B/anaconda/envs/boston-crash-model/lib/python2.7/site-packages/ipykernel_launcher.py:2: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
/Users/B/anaconda/envs/boston-crash-model/lib/python2.7/site-packages/ipykernel_launcher.py:4: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
after removing the cwd from sys.path.
###Markdown
Lift chart by "risk bin"The classifier problem is difficult because the classes are unbalanced (.05% have crashes at target week). More useful are the probabilities being produced by the model, which give some idea of risk.
###Code
def lift_chart(x_col, y_col, data, ax=None):
p = sns.barplot(x=x_col, y=y_col, data=data,
palette='Reds', ax=ax, ci=None)
vals = p.get_yticks()
p.set_yticklabels(['{:3.0f}%'.format(i*100) for i in vals])
xvals = [x.get_text().split(',')[-1].strip(']') for x in p.get_xticklabels()]
xvals = ['{:3.0f}%'.format(float(x)*100) for x in xvals]
p.set_xticklabels(xvals)
p.set_facecolor('white')
p.set_xlabel('')
p.set_ylabel('')
p.set_title('Predicted probability vs actual percent')
return(p)
def density(data, score, ax=None):
p = sns.kdeplot(data[score], ax=ax)
p.set_facecolor('white')
p.legend('')
p.set_xlabel('Predicted probability of crash')
p.set_title('KDE plot predictions')
return(p)
#pd.qcut(risk_df['risk_score'], 4)
risk_scores = test.rundict['LR_base']['m_fit'].predict_proba(test.data.test_x[features])[:,1]
risk_df = pd.DataFrame({'risk_score':risk_scores, 'crash':test.data.test_y})
print risk_df.risk_score.describe()
risk_df['categories'] = pd.qcut(risk_df['risk_score'], 4)
risk_mean = risk_df.groupby('categories')['crash'].count()
print risk_mean
fig, axes = plt.subplots(1, 2)
lift_chart('categories', 'crash', risk_df,
ax=axes[1])
density(risk_df, 'risk_score', ax=axes[0])
# output predictions
# predict on all segments
data_model['risk_score'] = test.rundict['RF_base']['m_fit'].predict_proba(data_model[features])[:,1]
data_model.to_csv('seg_with_risk_score_adj.csv', index=False)
###Output
_____no_output_____
###Markdown
Check sensitivity to weekI predicted an arbitrary week as target here, but I'd like to see whether things change significantly if I change that week. A good metric to measure that is brier score loss. It'll be low throughout as the classifier doesn't perform great, but it shouldn't vary a huge amount.
###Code
def run_model_for_week(weeks=[20, 30, 40, 50], output=False):
for w in weeks:
print "week ", w
crash_lags = format_crash_data(data_nonzero.set_index(['segment_id','year','week']), 'crash', w, 2016)
data = crash_lags.merge(data_segs, left_on='segment_id', right_on='segment_id')
adj_lags = get_adj_crash_lags(w, 2016)
data = data.merge(adj_lags, left_on='segment_id', right_index=True, suffixes=('', '_adj'))
data.fillna(0, inplace=True)
df = Indata(data, 'target')
# create train/test split
df.tr_te_split(.7)
test = Tester(df)
test.init_tuned(tune)
test.run_tuned('LR_base', cal=False)
print '\n'
if output==True:
return(test.rundict['LR_base']['m_fit'])  # return the fitted model from the final week
run_model_for_week()
# week predictions output
###Output
_____no_output_____
###Markdown
Benchmark Model for crash prediction Developed by: bpben Details the steps of data processing, feature engineering, and model tuning/testing for crash and road data
###Code
import re
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import scipy.stats as ss
from glob import glob
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler
from scipy.stats import describe
###Output
_____no_output_____
###Markdown
Helpers for tuning/testing models, available [here](https://github.com/bpben/model_helpers) as well
###Code
import numpy as np
import pandas as pd
import sklearn.ensemble as ske
import sklearn.svm as svm
import sklearn.linear_model as skl
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import RandomizedSearchCV, KFold, StratifiedKFold, GroupKFold, GroupShuffleSplit
from sklearn.calibration import CalibratedClassifierCV
from sklearn.preprocessing import StandardScaler
class Indata():
scoring = None
data = None
train_x, train_y, test_x, test_y = None, None, None, None
is_split = 0
#init with pandas DF and target column name, specify scoring observations
def __init__(self, data, target, scoring=None):
#If scoring observations, store under scoring attribute
if scoring is not None:
self.data = data[~(scoring)]
self.scoring = data[scoring]
else:
self.data = data
self.target = target
# Split into train/test
# pct : percent training observations
# datesort : specify date column for sorting values
# If this is not None, split will be non-random (i.e. split on sorted obs)
def tr_te_split(self, pct, datesort=None, group_col=None):
"""
Split into train/test
pct : percent training observations
datesort : specify date column for sorting values
If this is not None, split will be non-random (i.e. split on sorted obs)
group_col : group column name for groupkfold split
Will also be passed to tuner
"""
if group_col:
self.group_col = group_col
grouper = GroupShuffleSplit(n_splits=1, train_size=pct)
g = grouper.split(self.data, groups=self.data[group_col])
# get the actual indexes of the training set
inds, _ = tuple(*g)
# translate that into boolean array
inds = self.data.index[inds]
inds = self.data.index.isin(inds)
elif datesort:
self.data.sort_values(datesort, inplace=True)
self.data.reset_index(drop=True, inplace=True)
inds = np.arange(0.0,len(self.data)) / len(self.data) < pct
else:
inds = np.random.rand(len(self.data)) < pct
self.train_x = self.data[inds]
print 'Train obs:', len(self.train_x)
self.train_y = self.data[self.target][inds]
self.test_x = self.data[~inds]
print 'Test obs:', len(self.test_x)
self.test_y = self.data[self.target][~inds]
self.is_split = 1
class Tuner():
"""
Initiates with indata class, will tune series of models according to parameters.
Outputs RandomizedGridCV results and parameterized model in dictionary
"""
data = None
train_x, train_y = None, None
group_col = None
def __init__(self, indata, best_models=None, grid_results=None):
if indata.is_split == 0:
raise ValueError('Data is not split, cannot be tested')
# check if grouped by some column
if hasattr(indata,'group_col'):
self.group_col = indata.group_col
self.data = indata.data
self.train_x = indata.train_x
self.train_y = indata.train_y
if best_models is None:
self.best_models = {}
if grid_results is None:
self.grid_results = pd.DataFrame()
def make_grid(self, model, cvparams, mparams):
#Makes CV grid
# to implement, no capability for GroupKFold for randomizedsearch
#if self.group_col:
#cv = GroupKFold(cvparams['folds'])
grid = RandomizedSearchCV(
model(),scoring=cvparams['pmetric'],
cv = KFold(cvparams['folds'], cvparams['shuffle']),
refit=False, n_iter=cvparams['iter'],
param_distributions=mparams, verbose=1)
return(grid)
def run_grid(self, grid, train_x, train_y):
grid.fit(train_x, train_y)
results = pd.DataFrame(grid.cv_results_)[['mean_test_score','mean_train_score','params']]
best = {}
best['bp'] = grid.best_params_
best[grid.scoring] = grid.best_score_
return(best, results)
def tune(self, name, m_name, features, cvparams, mparams):
if hasattr(ske, m_name):
model = getattr(ske, m_name)
elif hasattr(skl, m_name):
model = getattr(skl, m_name)
elif hasattr(xgb, m_name):
model = getattr(xgb, m_name)
elif hasattr(svm, m_name):
model = getattr(svm, m_name)
else:
raise ValueError('Model name is invalid.')
grid = self.make_grid(model, cvparams, mparams)
best, results = self.run_grid(grid, self.train_x[features], self.train_y)
results['name'] = name
results['m_name'] = m_name
self.grid_results = self.grid_results.append(results)
best['model'] = model(**best['bp'])
best['features'] = list(features)
self.best_models.update({name: best})
class Tester():
"""
Initiates with indata class, receives parameterized sklearn models, prints and stores results
"""
def __init__(self, data, rundict=None):
if data.is_split == 0 :
raise ValueError('Data is not split, cannot be tested')
else:
self.data = data
if rundict is None:
self.rundict = {}
def init_tuned(self, tuned):
""" pass Tuner object, populatest with names, models, features """
if tuned.best_models=={}:
raise ValueError('No tuned models found')
else:
self.rundict.update(tuned.best_models)
def predsprobs(self, model, test_x):
""" Produce predicted class and probabilities """
# if the model doesn't have predict proba, will be treated as GLM
if hasattr(model, 'predict_proba'):
preds = model.predict(test_x)
probs = model.predict_proba(test_x)[:,1]
else:
probs = model.predict(test_x)
preds = (probs>=.5).astype(int)
return(preds, probs)
def get_metrics(self, preds, probs, test_y):
""" Produce metrics (f1 score, AUC, brier) """
# if test is not binary, just run brier
if len(np.unique(test_y))==2:
f1_s = metrics.f1_score(test_y, preds)
roc = metrics.roc_auc_score(test_y, probs)
else:
f1_s, roc = None, None
brier = metrics.brier_score_loss(test_y, probs)
return(f1_s, roc, brier)
def make_result(self, model, test_x, test_y):
""" gets predictions and runs metrics """
preds, probs = self.predsprobs(model, test_x)
f1_s, roc, brier = self.get_metrics(preds, probs, test_y)
print "f1_score: ", f1_s
print "roc auc: ", roc
print "brier_score: ", brier
result = {}
result['f1_s'] = f1_s
result['roc'] = roc
result['brier'] = brier
return(result)
def run_model(self, name, model, features, cal=True, cal_m='sigmoid'):
"""
Run a specific model (not from the Tuner class)
By default, calibrates predictions and produces metrics for them
Will also store in rundict object
"""
results = {}
results['features'] = list(features)
results['model'] = model
print "Fitting {} model with {} features".format(name, len(features))
if cal:
# Need disjoint calibration/training datasets
# Split 50/50
rnd_ind = np.random.rand(len(self.data.train_x)) < .5
train_x = self.data.train_x[features][rnd_ind]
train_y = self.data.train_y[rnd_ind]
cal_x = self.data.train_x[features][~rnd_ind]
cal_y = self.data.train_y[~rnd_ind]
else:
train_x = self.data.train_x[features]
train_y = self.data.train_y
m_fit = model.fit(train_x, train_y)
result = self.make_result(
m_fit,
self.data.test_x[features],
self.data.test_y)
results['raw'] = result
results['m_fit'] = m_fit
if cal:
print "calibrated:"
m_c = CalibratedClassifierCV(model, method = cal_m)
m_fit_c = m_c.fit(cal_x, cal_y)
result_c = self.make_result(m_fit_c, self.data.test_x[features], self.data.test_y)
results['calibrated'] = result_c
print "\n"
if name in self.rundict:
self.rundict[name].update(results)
else:
self.rundict.update({name:results})
def run_tuned(self, name, cal=True, cal_m='sigmoid'):
""" Wrapper for run_model when using Tuner object """
self.run_model(name, self.rundict[name]['model'], self.rundict[name]['features'], cal, cal_m)
###Output
/Users/B/anaconda/envs/boston-crash-model/lib/python2.7/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
###Markdown
Data processingThe approach here is to create four time-based features:1. crashes in the past week2. crashes in the past month3. crashes in the past quarter (three months)4. average crashes per week up to the target weekAll features except 4 are calculated to exclude one another. That is, crashes in the past month do not include the past week's crashes. Crashes in the past quarter do not include the past month.
###Code
SEG_CHARS = ['AADT', 'SPEEDLIMIT', 'Struct_Cnd', 'Surface_Tp', 'F_F_Class']
# Read in data
data = pd.read_csv('../../data/processed/vz_predict_dataset.csv.gz', compression='gzip', dtype={'segment_id':'str'})
data.sort_values(['segment_id', 'year', 'week'], inplace=True)
# get segments with non-zero crashes
data_nonzero = data.set_index('segment_id').loc[data.groupby('segment_id').crash.sum()>0]
data_nonzero.reset_index(inplace=True)
def format_crash_data(data, col, target_week, target_year):
""" formats crash data for train/test
target_week: week to predict (make into binary target)
target_year: year for predicted week
note: data must be available for 4 months prior to target
gets previous week count, previous month count, previous quarter count, avg per week
"""
assert target_week>16
pre_week = target_week - 1
pre_month = range(pre_week-4, target_week)
pre_quarter = range(pre_month[0]-12, target_week)
# week interval for each segment
# full range = pre_quarter : target
sliced = data.loc[(slice(None),slice(target_year,target_year), slice(1, target_week)),:]
week_data = sliced[col].unstack(2)
week_data.reset_index(level=1, inplace=True)
# aggregate
week_data['pre_month'] = week_data[pre_month].sum(axis=1)
week_data['pre_quarter'] = week_data[pre_quarter].sum(axis=1)
week_data['pre_week'] = week_data[pre_week]
# avg as of target week
except_target = data.loc[(slice(None),
slice(target_year,target_year),
slice(target_week,None)),:].index
avg_week = data.drop(except_target)
avg_week = avg_week.reset_index().groupby('segment_id')[col].mean()
avg_week.name = 'avg_week'
# join to week data
week_data = week_data.join(avg_week)
# binarize target
week_data['target'] = (week_data[target_week]>0).astype(int)
week_data = week_data.reset_index()
return(week_data[['segment_id','target', 'pre_week',
'pre_month', 'pre_quarter', 'avg_week']])
# simple add concern, any concern reported 2016
concern_observed = data_nonzero[data_nonzero.year==2016].groupby('segment_id').concern.max()
concern_observed.name = 'concern_observed'
crash_lags = format_crash_data(data_nonzero.set_index(['segment_id','year','week']), 'crash', 19, 2017)
data_segs = data_nonzero.groupby('segment_id')[SEG_CHARS].max() # grab the highest values from each column for a segment, not used in model?
data_segs.reset_index(inplace=True)
# add in atrs
atrs = pd.read_csv('../../data/processed/atrs_predicted.csv', dtype={'id':'str'})
# for some reason pandas reads the id as float before str conversions
atrs['id'] = atrs.id.apply(lambda x: x.split('.')[0])
data_segs = data_segs.merge(atrs[['id','speed_coalesced', 'volume_coalesced']],
left_on='segment_id', right_on='id')
# add in tmcs - conflicts
# it either has some or doesn't
# I think just filling na = 0 should work for now
tmcs = pd.read_json('../../data/processed/tmc_summary.json',
dtype={'near_id':str})[['near_id','Conflict']]
data_segs = data_segs.merge(tmcs, left_on='segment_id', right_on='near_id', how='left')
data_segs.Conflict.fillna(0, inplace=True)
data_model = crash_lags.merge(data_segs, left_on='segment_id', right_on='segment_id')
# add concerns
data_model = data_model.merge(concern_observed.reset_index(), on='segment_id')
# Add in adjacency info
adj_info = pd.read_csv('../../data/processed/adjacency_info.csv', usecols=['segment_id', 'orig_id'],
dtype={'segment_id':'str', 'orig_id':'str'})
# link adjacent segments for segments with crashes
adj_info = adj_info[adj_info.segment_id.isin(data_model.segment_id)]
adj_mat = adj_info.merge(adj_info, on='orig_id')
adj_mat = adj_mat[['segment_id_x', 'segment_id_y']]
adj_mat.drop_duplicates(inplace=True)
# including segments with only self-adjacent
# for this, need to ensure they don't join to their own data
adj_mat.loc[adj_mat.segment_id_x==adj_mat.segment_id_y, 'segment_id_y'] = np.NaN
def get_adj_crash_lags(target_week, target_year):
"""calculate total number of crashes that occurred
in adjacent segments for target week and lags as defined in format_crash_data
"""
lag_data = format_crash_data(data_nonzero.set_index(['segment_id','year','week']), 'crash', target_week, target_year)
merge_lags = adj_mat.merge(lag_data, left_on='segment_id_y', right_on='segment_id', how='left')
adj_lags = merge_lags.groupby(['segment_id_x'])['pre_week', 'pre_month', 'pre_quarter'].sum()
return adj_lags
adj_lags = get_adj_crash_lags(19, 2017)
# fill those with only self-adj zero
adj_lags.fillna(0, inplace=True)
data_model = data_model.merge(adj_lags, how='left', left_on='segment_id', right_index=True, suffixes=('', '_adj'))
data_model.fillna(0, inplace=True)
# standardize for LR
#from sklearn.preprocessing import scale
#data_scaled = pd.DataFrame(scale(data_model['AADT', 'SPEEDLIMIT']),
# columns=[f+'_scaled' for f in features])
#data_model = pd.concat([data_model, data_scaled], axis=1)
# trying a different feature set
dummy_att = ['SPEEDLIMIT', 'Struct_Cnd', 'Surface_Tp', 'F_F_Class']
for d in dummy_att:
t = pd.get_dummies(data_model[d])
t.columns = [d+str(c) for c in t.columns]
data_model = pd.concat([data_model, t], axis=1)
# aadt - log-transform
data_model['log_aadt'] = np.log(data_model.AADT+1)
# add segment type
data_model['intersection'] = data_model.segment_id.map(lambda x: x[:2]!='00').astype(int)
# features
features = data_model.filter(regex='[0-9]').columns.tolist() + ['log_aadt', 'intersection']
# Features
#features = [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', u'AADT', u'SPEEDLIMIT',
# u'Struct_Cnd', u'Surface_Tp', u'F_F_Class', u'pre_week_adj',
# u'pre_month_adj', u'pre_quarter_adj']
features += [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', 'concern_observed']
#features += ['speed_coalesced', 'volume_coalesced']
#features += ['Conflict']
lm_features = list(set(features) - set(['SPEEDLIMIT1', 'Struct_Cnd0', 'F_F_Class0']))
features = [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', 'concern_observed']
lm_features = [u'pre_week', u'pre_month', u'pre_quarter', 'avg_week', 'concern_observed']
###Output
_____no_output_____
###Markdown
Model tuning

This uses the model helpers above. They're based on sklearn and implement a randomized grid search with K-fold cross-validation.
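For reference, the underlying sklearn pattern the `Tuner` wraps looks roughly like this — a minimal, standalone sketch (the estimator and parameter values here are illustrative, not the tuned ones):

```python
# Randomized search over a scipy distribution with 5-fold CV (sketch only).
from scipy import stats as ss
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

search = RandomizedSearchCV(
    RandomForestClassifier(n_estimators=256),
    param_distributions={'max_features': ss.beta(a=5, b=1)},
    n_iter=5, cv=5, scoring='roc_auc')
# search.fit(train_x[features], train_y); then inspect search.best_estimator_
```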
###Code
#Initialize data
df = Indata(data_model, 'target')
#Create train/test split
df.tr_te_split(.7)
#Parameters for model
# class weight
a = data_model['target'].value_counts(normalize=True)
w = 1/a[1]
#Model parameters
params = dict()
#cv parameters
cvp = dict()
cvp['pmetric'] = 'roc_auc'
cvp['iter'] = 5 #number of iterations
cvp['folds'] = 5 #folds for cv (default)
cvp['shuffle'] = True
#LR parameters
mp = dict()
mp['LogisticRegression'] = dict()
mp['LogisticRegression']['penalty'] = ['l1','l2']
mp['LogisticRegression']['C'] = ss.beta(a=5,b=2) #beta distribution for selecting reg strength
mp['LogisticRegression']['class_weight'] = ['balanced']
#RF model parameters
mp['RandomForestClassifier'] = dict()
mp['RandomForestClassifier']['n_estimators'] = [2**8] #number of trees in the forest
mp['RandomForestClassifier']['max_features'] = ss.beta(a=5,b=1) #number of features at split
mp['RandomForestClassifier']['max_leaf_nodes'] = ss.nbinom(n=2,p=0.001,loc=100) #max number of leaves to create
#mp['RandomForestClassifier']['class_weight'] = ['balanced']
mp['RandomForestClassifier']['class_weight'] = [{0:1,1:w}]
#xgBoost model parameters
mp['XGBClassifier'] = dict()
mp['XGBClassifier']['max_depth'] = range(3, 7)
mp['XGBClassifier']['min_child_weight'] = range(1, 5)
mp['XGBClassifier']['learning_rate'] = ss.beta(a=2,b=15)
mp['XGBClassifier']['scale_pos_weight'] = [w]
#Initialize tuner
tune = Tuner(df)
#Base XG model
tune.tune('XG_base', 'XGBClassifier', features, cvp, mp['XGBClassifier'])
#Base RF model
tune.tune('RF_base', 'RandomForestClassifier', features, cvp, mp['RandomForestClassifier'])
#Base LR model
tune.tune('LR_base', 'LogisticRegression', lm_features, cvp, mp['LogisticRegression'])
#Display results
tune.grid_results
# Run test
test = Tester(df)
test.init_tuned(tune)
test.run_tuned('RF_base', cal=False)
test.run_tuned('LR_base', cal=False)
test.run_tuned('XG_base', cal=False)
t = test.rundict['XG_base']['m_fit'].predict_proba(test.data.test_x[features])[::,1]
metrics.roc_auc_score(test.data.test_y,t)
# Check feature importance
f_importance = test.rundict['XG_base']['m_fit'].feature_importances_
fi = list(zip(features, f_importance))
print sorted(fi, key=lambda x: x[1], reverse=True)
from sklearn.metrics import roc_auc_score
# trying some other models
minus_adj = list(set(lm_features) - set([x for x in lm_features if x.find('volume')!=-1]))
xg = xgb.XGBClassifier(**test.rundict['XG_base']['bp'])
xg.fit(test.data.train_x[minus_adj], test.data.train_y)
preds = xg.predict_proba(
test.data.test_x[minus_adj])[::,1]
roc_auc_score(test.data.test_y, preds)
# trying some other models
minus_adj = list(set(lm_features) - set([x for x in lm_features if x.find('adj')!=-1]))
lr = skl.LogisticRegression(**test.rundict['LR_base']['bp'])
lr.fit(test.data.train_x[minus_adj], test.data.train_y)
preds = lr.predict_proba(
test.data.test_x[minus_adj])[::,1]
roc_auc_score(test.data.test_y, preds)
lr = skl.LogisticRegression(**test.rundict['LR_base']['bp'])
lr.fit(test.data.train_x['avg_week'].reshape(-1,1), test.data.train_y)
preds = lr.predict_proba(
test.data.test_x['avg_week'].reshape(-1,1))[::,1]
roc_auc_score(test.data.test_y, preds)
###Output
/Users/B/anaconda/envs/boston-crash-model/lib/python2.7/site-packages/ipykernel_launcher.py:2: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
/Users/B/anaconda/envs/boston-crash-model/lib/python2.7/site-packages/ipykernel_launcher.py:4: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
after removing the cwd from sys.path.
###Markdown
Lift chart by "risk bin"The classifier problem is difficult because the classes are unbalanced (.05% have crashes at target week). More useful are the probabilities being produced by the model, which give some idea of risk.
###Code
def lift_chart(x_col, y_col, data, ax=None):
    p = sns.barplot(x=x_col, y=y_col, data=data,
                    palette='Reds', ax=ax, ci=None)
vals = p.get_yticks()
p.set_yticklabels(['{:3.0f}%'.format(i*100) for i in vals])
xvals = [x.get_text().split(',')[-1].strip(']') for x in p.get_xticklabels()]
xvals = ['{:3.0f}%'.format(float(x)*100) for x in xvals]
p.set_xticklabels(xvals)
p.set_facecolor('white')
p.set_xlabel('')
p.set_ylabel('')
p.set_title('Predicted probability vs actual percent')
return(p)
def density(data, score, ax=None):
    p = sns.kdeplot(data[score], ax=ax)
p.set_facecolor('white')
p.legend('')
p.set_xlabel('Predicted probability of crash')
p.set_title('KDE plot predictions')
return(p)
#pd.qcut(risk_df['risk_score'], 4)
risk_scores = test.rundict['LR_base']['m_fit'].predict_proba(test.data.test_x[features])[:,1]
risk_df = pd.DataFrame({'risk_score':risk_scores, 'crash':test.data.test_y})
print risk_df.risk_score.describe()
risk_df['categories'] = pd.qcut(risk_df['risk_score'], 4)
risk_mean = risk_df.groupby('categories')['crash'].count()
print risk_mean
fig, axes = plt.subplots(1, 2)
lift_chart('categories', 'crash', risk_df,
ax=axes[1])
density(risk_df, 'risk_score', ax=axes[0])
# output predictions
# predict on all segments
data_model['risk_score'] = test.rundict['RF_base']['m_fit'].predict_proba(data_model[features])[:,1]
data_model.to_csv('seg_with_risk_score_adj.csv', index=False)
###Output
_____no_output_____
###Markdown
Check sensitivity to week

I predicted an arbitrary week as target here, but I'd like to see whether things change significantly if I change that week. A good metric to measure that is Brier score loss. It'll be low throughout as the classifier doesn't perform great, but it shouldn't vary a huge amount.
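Brier score is just the mean squared difference between the predicted probability and the 0/1 outcome — a tiny worked example (the numbers are made up):

```python
import numpy as np

p_hat = np.array([0.1, 0.8, 0.4])   # hypothetical predicted probabilities
y = np.array([0, 1, 0])             # observed outcomes
print np.mean((p_hat - y)**2)       # (0.01 + 0.04 + 0.16) / 3 = 0.07
```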
###Code
def run_model_for_week(weeks=[20, 30, 40, 50], output=False):
    for w in weeks:
print "week ", w
crash_lags = format_crash_data(data_nonzero.set_index(['segment_id','year','week']), 'crash', w, 2016)
data = crash_lags.merge(data_segs, left_on='segment_id', right_on='segment_id')
adj_lags = get_adj_crash_lags(w, 2016)
data = data.merge(adj_lags, left_on='segment_id', right_index=True, suffixes=('', '_adj'))
data.fillna(0, inplace=True)
df = Indata(data, 'target')
# create train/test split
df.tr_te_split(.7)
test = Tester(df)
test.init_tuned(tune)
test.run_tuned('LR_base', cal=False)
print '\n'
if output==True:
return(test.rundict['LR_base']['m_fit'].pred)
run_model_for_week()
# week predictions output
###Output
_____no_output_____ |
.ipynb_checkpoints/1-Data-reading-manipulation-checkpoint.ipynb | ###Markdown
0. Setup
###Code
import os
import pandas as pd
import pickle
from io import StringIO
# change into the folder that contains the unzipped data (in the folder "DataManagementIntergration_Data")
#data_path = r'C:\Users\sjants\Desktop\Data' # simon
data_path = r'../DataManagementIntergration_Data/OriginalData_wRouteTest' # ivo
###Output
_____no_output_____
###Markdown
1. Get overview of files
###Code
dict_folder_file = {} # initialize empty dictionary
for subfolder in os.listdir(data_path):
if subfolder not in dict_folder_file.keys(): # check if the dictionary already contains entries for the subfolder
dict_folder_file[subfolder] = [] # if not, add an empty list to as value for that entry
for entry in os.listdir('/'.join((data_path, subfolder))):
if entry == 'UnityDataSave': # if the subfolder is UnityDataSave
contents = os.listdir('/'.join((data_path, subfolder, entry))) # get the contents of the folder
contents = ['/'.join((entry, i)) for i in contents] # construct the path to the file
for entr in contents:
dict_folder_file[subfolder].append(entr) # append 'UnityDataSave/filename' to the dictionary
else:
dict_folder_file[subfolder].append(entry) # append the list for that entry with the respective files
#dict_folder_file # keys: subfolders, values: list of the contained files, e.g. dict = {folder1: [file1, file2], ...}
###Output
_____no_output_____
###Markdown
2. Read in files Inspect files
###Code
if False:
for key in dict_folder_file.keys(): # iterate through each subfolder
print(key) # print key (folder)
print(dict_folder_file[key]) # print dict entries (files)
print(dict_folder_file[key][0]) # print first file
###Output
_____no_output_____
###Markdown
2.2 Create `df_detailed_subj`
###Code
list_dfs = []
for key in dict_folder_file.keys(): #iterate through each subfolder
df = pd.read_csv('/'.join((data_path,key,dict_folder_file[key][0]))) #read in one file as data frame
ID = []
for i in range(1, len(df) + 1):
ID.append(i)
df.insert(1, 'TaskID', ID)
list_dfs.append(df) #append data frames
df_detailed_subj = pd.concat(list_dfs, axis = 0, ignore_index = True) # concatenate dfs into one
with open('./data_raw/dumps_detailed_subj.pkl', 'wb') as f:
pickle.dump(df_detailed_subj, f)
#df_detailed_subj # inspect
###Output
_____no_output_____
###Markdown
2.3 Create `df_ptsot_results`
###Code
list_dfs = []
for key in dict_folder_file.keys():
df = pd.read_csv('/'.join((data_path,key,dict_folder_file[key][1])),
names = ['QuestionNumber','CorrectResponseAngle','ActualResponseAngle','AbsoluteAngularError'],
header = None)
ID = []
for i in range(0, len(df)):
ID.append(int(key[4:]))
df.insert(0, 'UserID', ID)
list_dfs.append(df)
df_ptsot_results = pd.concat(list_dfs, axis = 0, ignore_index = True, sort = False) # concatenate dfs into one
df_ptsot_results
with open('./data_raw/dumps_ptsot_results.pkl', 'wb') as f:
pickle.dump(df_ptsot_results, f)
#df_ptsot_results # inspect
###Output
_____no_output_____
###Markdown
2.4 Create `df_JRD`
###Code
list_dfs = []
for key in dict_folder_file.keys():
if len(dict_folder_file[key]) > 3:
df = pd.read_csv('/'.join((data_path, key, dict_folder_file[key][3])), skipinitialspace = True)
df.columns = ['UserID' if x == 'PartID ' else x for x in df.columns]
df.drop(df.index[0], inplace = True)
list_dfs.append(df)
df_JRD = pd.concat(list_dfs, axis = 0, ignore_index = True,sort = False)
with open('./data_raw/dumps_JRD.pkl', 'wb') as f:
pickle.dump(df_JRD, f)
#df_JRD # inspect
###Output
_____no_output_____
###Markdown
2.5 Create `df_sbsod`
###Code
list_dfs = []
for key in dict_folder_file.keys():
if len(dict_folder_file[key]) > 3:
with open('/'.join((data_path, key, dict_folder_file[key][2]))) as file:
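            # the free-text answers in this survey export embed commas; swap the
            # known offending phrases to semicolons so pd.read_csv can still
            # split the comma-delimited file correctly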
data = file.read().replace("merken,", "merken;").replace("nachdenke,", "nachdenke;").replace("(N,S,O,W)", "(N;S;O;W)").replace("(N, S, E, W)", "(N; S; E; W)").replace("Probleme,", "Probleme;").replace("wichtig,", "wichtig;").replace("erinnern,", "erinnern;")
if len(data) > 0:
TESTDATA = StringIO(data)
df = pd.read_csv(TESTDATA, sep = ",")
ID = []
for i in range(0, len(df)):
ID.append(int(key[4:]))
df.insert(0, 'UserID', ID)
list_dfs.append(df)
df_sbsod = pd.concat(list_dfs, axis = 0, ignore_index = True, sort = False)
with open('./data_raw/dumps_sbsod.pkl', 'wb') as f:
pickle.dump(df_sbsod, f)
#df_sbsod # inspect
###Output
_____no_output_____
###Markdown
2.6 Create `df_RouteTest`
###Code
list_dfs = []
for key in dict_folder_file.keys():
if len(dict_folder_file[key]) == 6:
print(key)
df = pd.read_csv('/'.join((data_path, key, dict_folder_file[key][4])), skipinitialspace = True)
list_dfs.append(df)
df_RouteTest = pd.concat(list_dfs, axis = 0, ignore_index = True, sort = False)
with open('./data_raw/dumps_RouteTest.pkl', 'wb') as f:
pickle.dump(df_RouteTest, f)
#df_RouteTest # inspect
###Output
_____no_output_____ |
notebooks/collision_avoidance/live_demo_resnet18_build_trt.ipynb | ###Markdown
Collision Avoidance - Build TensorRT model for live demo

In this notebook we'll use the model we trained to detect whether the robot is ``free`` or ``blocked`` to enable a collision avoidance behavior on the robot.

Load the trained model

We'll assume that you've already downloaded ``best_model.pth`` to your workstation as instructed in the training notebook. Now, you should upload this model into this notebook's directory by using the Jupyter Lab upload tool. Once that's finished there should be a file named ``best_model.pth`` in this notebook's directory.

> Please make sure the file has uploaded fully before calling the next cell.

Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
###Code
import torch
import torchvision
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
model = model.cuda().eval().half()
###Output
_____no_output_____
###Markdown
Next, load the trained weights from the ``best_model_resnet18.pth`` file that you uploaded
###Code
model.load_state_dict(torch.load('best_model_resnet18.pth'))
###Output
_____no_output_____
###Markdown
Currently, the model weights are located in CPU memory. Execute the code below to transfer them to the GPU device.
###Code
device = torch.device('cuda')
###Output
_____no_output_____
###Markdown
TensorRT

> If your setup does not have `torch2trt` installed, you need to first install `torch2trt` by executing the following in the console.

```bash
cd $HOME
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install
```

Convert and optimize the model using torch2trt for faster inference with TensorRT. Please see the torch2trt readme for more details.

> This optimization process can take a couple minutes to complete.
###Code
from torch2trt import torch2trt
data = torch.zeros((1, 3, 224, 224)).cuda().half()
model_trt = torch2trt(model, [data], fp16_mode=True)
###Output
_____no_output_____
###Markdown
Save the optimized model using the cell below
###Code
torch.save(model_trt.state_dict(), 'best_model_trt.pth')
###Output
_____no_output_____
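###Markdown
For the live demo itself, the saved engine can be reloaded without re-running the conversion — a minimal sketch using torch2trt's `TRTModule` (the filename is taken from the cell above):

```python
import torch
from torch2trt import TRTModule

model_trt = TRTModule()
model_trt.load_state_dict(torch.load('best_model_trt.pth'))
```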
###Markdown
Collision Avoidance - Build TensorRT model for live demo

In this notebook we'll use the model we trained to detect whether the robot is ``free`` or ``blocked`` to enable a collision avoidance behavior on the robot.

Load the trained model

We'll assume that you've already downloaded ``best_model.pth`` to your workstation as instructed in the training notebook. Now, you should upload this model into this notebook's directory by using the Jupyter Lab upload tool. Once that's finished there should be a file named ``best_model.pth`` in this notebook's directory.

> Please make sure the file has uploaded fully before calling the next cell.

Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
###Code
import torch
import torchvision
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
model = model.cuda().eval().half()
###Output
_____no_output_____
###Markdown
Next, load the trained weights from the ``best_model_resnet18.pth`` file that you uploaded
###Code
model.load_state_dict(torch.load('../../../ml/my_models/model.pth'))
###Output
_____no_output_____
###Markdown
Currently, the model weights are located in CPU memory. Execute the code below to transfer them to the GPU device.
###Code
device = torch.device('cuda')
###Output
_____no_output_____
###Markdown
TensorRT

> If your setup does not have `torch2trt` installed, you need to first install `torch2trt` by executing the following in the console.

```bash
cd $HOME
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install
```

Convert and optimize the model using torch2trt for faster inference with TensorRT. Please see the torch2trt readme for more details.

> This optimization process can take a couple minutes to complete.
###Code
from torch2trt import torch2trt
data = torch.zeros((1, 3, 224, 224)).cuda().half()
model_trt = torch2trt(model, [data], fp16_mode=True)
###Output
_____no_output_____
###Markdown
Save the optimized model using the cell below
###Code
torch.save(model_trt.state_dict(), 'best_model_trt.pth')
###Output
_____no_output_____
S2ML_Art_Generator.ipynb | ###Markdown
Follow & tag me in your art, revision requests etc. on Twitter: [@somewheresy](https://twitter.com/somewheresy)

**LEGACY VERSION**: [S2 VQGAN+CLIP Classic.ipynb](https://github.com/justin-bennington/somewhere-ml/blob/main/S2_VQGAN%2BCLIP_Classic.ipynb)

The notebook you're currently using is a multimodal GAN art generator patched together from various ML notebooks for generative art (see license). The results of this notebook may be distinct from others in the space.

This notebook is great for procedurally generating new images from a text prompt or input image. At Somewhere Systems we use this for everything from generative landscapes to materials design for 3D graphics. Consider checking out our work @ https://s2.lol and hiring us to demystify technology like AR, ML, etc.
###Code
#@title MIT License
#
# Copyright (c) 2021 Katherine Crowson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#@markdown What GPU am I using?
#@markdown V100 > P100 > everything else
!nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv
gpu_name = !nvidia-smi --query-gpu=gpu_name --format=csv
###Output
name, pci.bus_id, vbios_version
NVIDIA TITAN V, 00000000:03:00.0, 88.00.41.00.03
###Markdown
**Filesystem Setup**
###Code
#@markdown Use Temp Filesystem (not recommended)
import os
abs_root_path = "/workspaces/S2ML-Art-Generator/contents"
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
print("Well, I guess you're really bent on using the temporary runtime directory for some reason. Anyway, your root directory is: ")
!pwd
###Output
Root path check:
/workspaces/S2ML-Art-Generator/contents
Well, I guess you're really bent on using the temporary runtime directory for some reason. Anyway, your root directory is:
/workspaces/S2ML-Art-Generator/contents
###Markdown
**Dependencies**
###Code
# @title Library Installation
import os
!nvidia-smi
print("Downloading CLIP...")
!git clone https://github.com/openai/CLIP &> /dev/null
!git clone https://github.com/crowsonkb/guided-diffusion &> /dev/null
!pip install -e ./CLIP &> /dev/null
print("Installing library for guided diffusion...")
!pip install -e ./guided-diffusion &> /dev/null
print("Installing Python Libraries for AI")
!git clone https://github.com/CompVis/taming-transformers &> /dev/null
!pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null
!pip install kornia &> /dev/null
!pip install einops &> /dev/null
print("Installing transformers library...")
!pip install transformers &> /dev/null
print("Installing libraries for managing metadata...")
!pip install stegano &> /dev/null
!sudo apt update
!sudo apt -y install exempi
!sudo apt clean
!pip install python-xmp-toolkit &> /dev/null
!pip install imgtag &> /dev/null
!pip install pillow==7.1.2 &> /dev/null
!pip install taming-transformers &> /dev/null
!pip install imageio &> /dev/null
print("Installing ESRGAN for image upscaling...")
!git clone https://github.com/xinntao/ESRGAN &> /dev/null
print("Installing ffmpeg for creating videos...")
!pip install imageio-ffmpeg &> /dev/null
if not os.path.exists(abs_root_path + "/vqgan-steps"):
!mkdir "vqgan-steps" &> /dev/null
print("No directory for VQGAN+CLIP image output found. Made directory: ~/vqgan-steps")
if not os.path.exists(abs_root_path + "/diffusion-steps"):
!mkdir "diffusion-steps" &> /dev/null
print("No directory for CLIP-guided diffusion image output found. Made directory: ~/diffusion-steps")
!pip freeze > requirements.txt
print("Installation finished.")
#@title Download pre-trained models
#@markdown Ensure you select a model you've downloaded in the parameters block
#@markdown The below radio button downloads the model for CLIP-guided diffusion, a method that takes a bit longer to produce good results but generally makes more "realistic" interpretations of the prompt. If you'd like to use this, make sure to download Katherine Crowson's diffusion model.
diffusion = True #@param {type: "boolean"}
#@markdown Models for VQGAN+CLIP method
imagenet_1024 = False #@param {type:"boolean"}
imagenet_16384 = True #@param {type:"boolean"}
coco = False #@param {type:"boolean"}
# faceshq = True #@param {type:"boolean"}
# wikiart_1024 = True #@param {type:"boolean"}
wikiart_16384 = False #@param {type:"boolean"}
sflckr = False #@param {type:"boolean"}
if not os.path.exists(abs_root_path + "/models"):
os.mkdir(abs_root_path + "/models")
os.chdir(abs_root_path + "/models")
print("changing to models subdirectory: ")
!pwd
if imagenet_1024:
# !curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.yaml' #ImageNet 1024
# !curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.ckpt' #ImageNet 1024
!curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
!curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
if imagenet_16384:
# !curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.yaml' #ImageNet 16384
# !curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.ckpt' #ImageNet 16384
!curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/f/867b05fc8c4841768640/?dl=1'
!curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/f/274fb24ed38341bfa753/?dl=1'
if coco:
!curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO
!curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO
#if faceshq:
#!curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ
#!curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ
#if wikiart_1024:
#!curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024
#!curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024
if wikiart_16384:
# !curl -L -o wikiart_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.yaml' #WikiArt 16384
# !curl -L -o wikiart_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.ckpt' #WikiArt 16384
!curl -L -o wikiart_16384.ckpt -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.ckpt'
!curl -L -o wikiart_16384.yaml -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.yaml'
if sflckr:
!curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR
!curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR
if diffusion:
# Download the diffusion model
!curl -L -o 512x512_diffusion_uncond_finetune_008100.pt -C - 'https://the-eye.eu/public/AI/models/512x512_diffusion_unconditional_ImageNet/512x512_diffusion_uncond_finetune_008100.pt' #DIFFUSION 512
# @title Load libraries and definitions
print(abs_root_path)
os.chdir(abs_root_path)
!pwd
import argparse
import math
from pathlib import Path
import io
import sys
sys.path.append('./taming-transformers')
from IPython import display
from base64 import b64encode
from omegaconf import OmegaConf
from PIL import Image
from taming.models import cond_transformer, vqgan
import torch
from torch import nn, optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from tqdm.notebook import tqdm
from CLIP import clip
import kornia.augmentation as K
import numpy as np
import imageio
from PIL import ImageFile, Image
from imgtag import ImgTag # metadata
from libxmp import * # metadata
import libxmp # metadata
from stegano import lsb
import json
ImageFile.LOAD_TRUNCATED_IMAGES = True
sys.path.append('./CLIP')
sys.path.append('./guided-diffusion')
import clip
from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults
def sinc(x):
return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
def lanczos(x, a):
cond = torch.logical_and(-a < x, x < a)
out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
return out / out.sum()
def ramp(ratio, width):
n = math.ceil(width / ratio + 1)
out = torch.empty([n])
cur = 0
for i in range(out.shape[0]):
out[i] = cur
cur += ratio
return torch.cat([-out[1:].flip([0]), out])[1:-1]
def resample(input, size, align_corners=True):
n, c, h, w = input.shape
dh, dw = size
input = input.view([n * c, 1, h, w])
if dh < h:
kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
pad_h = (kernel_h.shape[0] - 1) // 2
input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
input = F.conv2d(input, kernel_h[None, None, :, None])
if dw < w:
kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
pad_w = (kernel_w.shape[0] - 1) // 2
input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
input = F.conv2d(input, kernel_w[None, None, None, :])
input = input.view([n, c, h, w])
return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
class ReplaceGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, x_forward, x_backward):
ctx.shape = x_backward.shape
return x_forward
@staticmethod
def backward(ctx, grad_in):
return None, grad_in.sum_to_size(ctx.shape)
replace_grad = ReplaceGrad.apply
class ClampWithGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, input, min, max):
ctx.min = min
ctx.max = max
ctx.save_for_backward(input)
return input.clamp(min, max)
@staticmethod
def backward(ctx, grad_in):
input, = ctx.saved_tensors
return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None
clamp_with_grad = ClampWithGrad.apply
def vector_quantize(x, codebook):
d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T
indices = d.argmin(-1)
x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook
return replace_grad(x_q, x)
class Prompt(nn.Module):
def __init__(self, embed, weight=1., stop=float('-inf')):
super().__init__()
self.register_buffer('embed', embed)
self.register_buffer('weight', torch.as_tensor(weight))
self.register_buffer('stop', torch.as_tensor(stop))
def forward(self, input):
input_normed = F.normalize(input.unsqueeze(1), dim=2)
embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)
dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)
dists = dists * self.weight.sign()
return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()
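# parse_prompt splits "text:weight:stop" with defaults for missing parts, e.g.
# parse_prompt('sunset over the sea:0.8') -> ('sunset over the sea', 0.8, -inf)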
def parse_prompt(prompt):
vals = prompt.rsplit(':', 2)
vals = vals + ['', '1', '-inf'][len(vals):]
return vals[0], float(vals[1]), float(vals[2])
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
self.augs = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
# K.RandomSolarize(0.01, 0.01, p=0.7),
K.RandomSharpness(0.3,p=0.4),
K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'),
K.RandomPerspective(0.2,p=0.4),
K.ColorJitter(hue=0.01, saturation=0.01, p=0.7))
self.noise_fac = 0.1
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
batch = self.augs(torch.cat(cutouts, dim=0))
if self.noise_fac:
facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)
batch = batch + facs * torch.randn_like(batch)
return batch
def load_vqgan_model(config_path, checkpoint_path):
config = OmegaConf.load(config_path)
if config.model.target == 'taming.models.vqgan.VQModel':
model = vqgan.VQModel(**config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':
parent_model = cond_transformer.Net2NetTransformer(**config.model.params)
parent_model.eval().requires_grad_(False)
parent_model.init_from_ckpt(checkpoint_path)
model = parent_model.first_stage_model
else:
raise ValueError(f'unknown model type: {config.model.target}')
del model.loss
return model
def resize_image(image, out_size):
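    # preserve the aspect ratio while capping the pixel area at out_size's area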
ratio = image.size[0] / image.size[1]
area = min(image.size[0] * image.size[1], out_size[0] * out_size[1])
size = round((area * ratio)**0.5), round((area / ratio)**0.5)
return image.resize(size, Image.LANCZOS)
# Define necessary functions for CLIP guided diffusion
def fetch(url_or_path):
if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):
r = requests.get(url_or_path)
r.raise_for_status()
fd = io.BytesIO()
fd.write(r.content)
fd.seek(0)
return fd
return open(url_or_path, 'rb')
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
return torch.cat(cutouts)
def spherical_dist_loss(x, y):
x = F.normalize(x, dim=-1)
y = F.normalize(y, dim=-1)
return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
def tv_loss(input):
"""L2 total variation loss, as in Mahendran et al."""
input = F.pad(input, (0, 1, 0, 1), 'replicate')
x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]
y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]
return (x_diff**2 + y_diff**2).mean([1, 2, 3])
###Output
/workspaces/S2ML-Art-Generator/contents
/workspaces/S2ML-Art-Generator/contents
###Markdown
**Image Generation**
###Code
#@markdown ### Set Global Parameters
os.chdir(abs_root_path)
seed = -1#@param {type:"number"}
display_frequency = 50#@param {type:"number"}
usingDiffusion = False;
#@markdown #**VQGAN+CLIP**
prompts = "We're expecting a baby girl | strawberry | cactus" #@param {type:"string"}
width = 512#@param {type:"number"}
height = 512#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
vqgan_model = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_16384", "coco", "sflckr"]
initial_image = ""#@param {type:"string"}
target_images = ""#@param {type:"string"}
max_iterations = 10000#@param {type:"number"}
input_images = ""
#@markdown ### Advanced VQGAN+CLIP Parameters
vq_init_weight = 0.0#@param {type:"number"}
vq_step_size = 0.1#@param {type:"number"}
vq_cutn = 64#@param {type:"number"}
vq_cutpow = 1.0#@param {type:"number"}
model_names={"vqgan_imagenet_f16_16384": 'ImageNet 16384',"vqgan_imagenet_f16_1024":"ImageNet 1024",
"wikiart_1024":"WikiArt 1024", "wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR"}
model_name = model_names[vqgan_model]
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
if seed == -1:
seed = None
if initial_image == "None":
initial_image = None
if target_images == "None" or not target_images:
target_images = []
else:
target_images = target_images.split("|")
target_images = [image.strip() for image in target_images]
if initial_image or target_images != []:
input_images = True
prompts = [frase.strip() for frase in prompts.split("|")]
if prompts == ['']:
prompts = []
args = argparse.Namespace(
prompts=prompts,
image_prompts=target_images,
noise_prompt_seeds=[],
noise_prompt_weights=[],
size=[width, height],
init_image=initial_image,
init_weight=0.,
clip_model= clip_model,
vqgan_config=f'models/{vqgan_model}.yaml',
vqgan_checkpoint=f'models/{vqgan_model}.ckpt',
step_size=vq_step_size,
cutn=vq_cutn,
cut_pow=vq_cutpow,
display_freq=display_frequency,
seed=seed,
)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
notebook_name = "VQGAN+CLIP"
print('Executing using VQGAN+CLIP method')
print('Using device:', device)
if prompts:
print('Using text prompt:', prompts)
if target_images:
print('Using image prompts:', target_images)
if args.seed is None:
seed = torch.seed()
else:
seed = args.seed
torch.manual_seed(seed)
print('Using seed:', seed)
model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
cut_size = perceptor.visual.input_resolution
e_dim = model.quantize.e_dim
f = 2**(model.decoder.num_resolutions - 1)
make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow)
n_toks = model.quantize.n_e
toksX, toksY = args.size[0] // f, args.size[1] // f
sideX, sideY = toksX * f, toksY * f
z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
if args.init_image:
pil_image = Image.open(args.init_image).convert('RGB')
pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
else:
one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
z = one_hot @ model.quantize.embedding.weight
z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
z_orig = z.clone()
z.requires_grad_(True)
opt = optim.Adam([z], lr=args.step_size)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
pMs = []
for prompt in args.prompts:
txt, weight, stop = parse_prompt(prompt)
embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for prompt in args.image_prompts:
path, weight, stop = parse_prompt(prompt)
img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
embed = perceptor.encode_image(normalize(batch)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights):
gen = torch.Generator().manual_seed(seed)
embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
pMs.append(Prompt(embed, weight).to(device))
def synth(z):
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)
return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)
def add_xmp_data(nombrefichero):
image = ImgTag(filename=nombrefichero)
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True})
if args.prompts:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True})
else:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', model_name, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
#for frases in args.prompts:
# image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.close()
def add_stegano_data(filename):
data = {
"title": " | ".join(args.prompts) if args.prompts else None,
"notebook": notebook_name,
"i": i,
"model": model_name,
"seed": str(seed),
"input_images": input_images
}
lsb.hide(filename, json.dumps(data)).save(filename)
@torch.no_grad()
def checkin(i, losses):
losses_str = ', '.join(f'{loss.item():g}' for loss in losses)
tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')
out = synth(z)
TF.to_pil_image(out[0].cpu()).save('progress.png')
add_stegano_data('progress.png')
add_xmp_data('progress.png')
display.display(display.Image('progress.png'))
def ascend_txt():
global i
out = synth(z)
iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
result = []
if args.init_weight:
result.append(F.mse_loss(z, z_orig) * args.init_weight / 2)
for prompt in pMs:
result.append(prompt(iii))
img = np.array(out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8))[:,:,:]
img = np.transpose(img, (1, 2, 0))
filename = f"vqgan-steps/{i:04}.png"
imageio.imwrite(filename, np.array(img))
add_stegano_data(filename)
add_xmp_data(filename)
return result
def train(i):
opt.zero_grad()
lossAll = ascend_txt()
if i % args.display_freq == 0:
checkin(i, lossAll)
loss = sum(lossAll)
loss.backward()
opt.step()
with torch.no_grad():
z.copy_(z.maximum(z_min).minimum(z_max))
i = 0
try:
with tqdm() as pbar:
while True:
train(i)
if i == max_iterations:
break
i += 1
pbar.update()
except KeyboardInterrupt:
pass
#@markdown # **CLIP-Guided Diffusion**
#@markdown ##### WARNING: This requires access to 16GB of VRAM reliably, so may not work for users not using Colab Pro/+
usingDiffusion = True;
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
prompt = "A group of vultures decide the behavior of the stock market\""#@param {type:"string"}
batch_size = 1#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
#@markdown Controls how much the image should look like the prompt.
clip_guidance_scale = 1000#@param {type:"number"}
#@markdown Controls the smoothness of the final output.
tv_scale = 150#@param {type:"number"}
cutn = 32#@param {type:"number"}
cut_pow = 0.5#@param {type:"number"}
n_batches = 1#@param {type:"number"}
#@markdown This can be an URL or Colab local path and must be in quotes.
init_image = None #@param {type:"string"}
#@markdown This needs to be between approx. 200 and 500 when using an init image.
#@markdown Higher values make the output look more like the init.
skip_timesteps = 0#@param {type:"number"}
diffusion_steps = 1500#@param {type:"number"}
if seed == -1:
seed = None
diff_image_size = 256 # size of image when using diffusion
diff_image_size = int(diff_image_size)
model_config = model_and_diffusion_defaults()
model_config.update({
'attention_resolutions': '32, 16, 8',
'class_cond': False,
'diffusion_steps': diffusion_steps,
'rescale_timesteps': True,
'timestep_respacing': str(diffusion_steps), # Modify this value to decrease the number of
# timesteps.
'image_size': 512,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 256,
'num_head_channels': 64,
'num_res_blocks': 2,
'resblock_updown': True,
'use_fp16': True,
'use_scale_shift_norm': True,
})
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Executing using CLIP guided diffusion method')
if (prompt != None):
print('Using prompt: '+ prompt)
print('Using device:', device)
model, diffusion = create_model_and_diffusion(**model_config)
model.load_state_dict(torch.load(abs_root_path + "/models/" + '512x512_diffusion_uncond_finetune_008100.pt', map_location='cpu'))
model.requires_grad_(False).eval().to(device)
for name, param in model.named_parameters():
if 'qkv' in name or 'norm' in name or 'proj' in name:
param.requires_grad_()
if model_config['use_fp16']:
model.convert_to_fp16()
clip_model = clip.load(clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
clip_size = clip_model.visual.input_resolution
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
def do_run():
if seed is not None:
torch.manual_seed(seed)
text_embed = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()
init = None
if init_image is not None:
init = Image.open(fetch(init_image)).convert('RGB')
init = init.resize((model_config['image_size'], model_config['image_size']), Image.LANCZOS)
init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)
make_cutouts = MakeCutouts(clip_size, cutn, cut_pow)
cur_t = None
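    # cond_fn is the CLIP-guidance hook: it estimates the denoised image at the
    # current step, embeds random cutouts of it with CLIP, and returns the
    # gradient that pulls the sample toward the prompt (plus a TV smoothness term)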
def cond_fn(x, t, y=None):
with torch.enable_grad():
x = x.detach().requires_grad_()
n = x.shape[0]
my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t
out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})
fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]
x_in = out['pred_xstart'] * fac + x * (1 - fac)
clip_in = normalize(make_cutouts(x_in.add(1).div(2)))
image_embeds = clip_model.encode_image(clip_in).float().view([cutn, n, -1])
dists = spherical_dist_loss(image_embeds, text_embed.unsqueeze(0))
losses = dists.mean(0)
tv_losses = tv_loss(x_in)
loss = losses.sum() * clip_guidance_scale + tv_losses.sum() * tv_scale
return -torch.autograd.grad(loss, x)[0]
if model_config['timestep_respacing'].startswith('ddim'):
sample_fn = diffusion.ddim_sample_loop_progressive
else:
sample_fn = diffusion.p_sample_loop_progressive
for i in range(n_batches):
cur_t = diffusion.num_timesteps - skip_timesteps - 1
samples = sample_fn(
model,
(batch_size, 3, model_config['image_size'], model_config['image_size']),
clip_denoised=False,
model_kwargs={},
cond_fn=cond_fn,
progress=True,
skip_timesteps=skip_timesteps,
init_image=init,
randomize_class=True,
)
for j, sample in enumerate(samples):
cur_t -= 1
for k, image in enumerate(sample['pred_xstart']):
filename = f'diffusion-steps/{batch_size * j:05}.png'
TF.to_pil_image(image.add(1).div(2).clamp(0, 1)).save(filename)
if j % display_frequency == 0 or cur_t == -1:
tqdm.write(f'Batch {i}, step {j}, output {k}:')
print()
display.display(display.Image(filename))
do_run()
###Output
_____no_output_____
###Markdown
**Upscale an image or a folder of images**
###Code
#@title Image Upscaling Setup
# partially adapted from https://colab.research.google.com/github/AhabbscienceStudioPak/ESRGAN/blob/master/ESRGAN_Colab.ipynb#scrollTo=MZuFBZncXRy1
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
ESRGAN_path = abs_root_path + "/ESRGAN"
if not os.path.exists(ESRGAN_path):
os.mkdir(ESRGAN_path)
import gdown
print("Downloading pretrained models")
output1 = ESRGAN_path + '/models/RRDB_ESRGAN_x4.pth'
output2 = ESRGAN_path + '/models/RRDB_PSNR_x4.pth'
output3 = ESRGAN_path + '/models/PPON_D.pth'
output4 = ESRGAN_path + '/models/PPON_G.pth'
print ('Downloading RRDB_ESRGAN_x4.pth')
gdown.download('https://drive.google.com/uc?id=1TPrz5QKd8DHHt1k8SRtm6tMiPjz_Qene', output1, quiet=True)
print ('Downloading RRDB_PSNR_x4.pth')
gdown.download('https://drive.google.com/uc?id=1pJ_T-V1dpb1ewoEra1TGSWl5e6H7M4NN', output2, quiet=True)
print ('Downloading PPON_D.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=1Fr5aKCD6mw6P-hI0BZr6My2gHNhtUk-V', output3, quiet=True)
print ('Downloading PPON_G.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=12uR3BSftNA0HDYiKda23GyAj_crpSjOm', output4, quiet=True)
#@title **Execute Image Upscaling**
import os.path as osp
import glob
import cv2
import numpy as np
import torch
from ESRGAN import RRDBNet_arch as arch
import requests
import imageio
import requests
import warnings
warnings.filterwarnings("ignore")
from google.colab import files
Choose_device = "cuda"
model_path = 'models/RRDB_PSNR_x4.pth' #@param ['models/RRDB_ESRGAN_x4.pth','models/RRDB_PSNR_x4.pth','models/PPON_G.pth','models/PPON_D.pth']
device = torch.device(Choose_device)
model_path = ESRGAN_path + '/' + model_path
esr_target_directory = 'your path in quotes' #@param {type:"string"}
test_img_folder = esr_target_directory
model = arch.RRDBNet(3, 3, 64, 23, gc=32)
model.load_state_dict(torch.load(model_path), strict=True)
model.eval()
model = model.to(device)
print('Model path {:s}. \nTesting...'.format(model_path))
idx = 0
for filename in os.listdir(test_img_folder):
filename = test_img_folder + "/" + filename
idx += 1
base = osp.splitext(osp.basename(filename))[0]
print(idx, base)
# read images
img = cv2.imread(filename, cv2.IMREAD_COLOR)
img = img * 1.0 / 255.0
img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float()
img_LR = img.unsqueeze(0)
img_LR = img_LR.to(device)
with torch.no_grad():
output = model(img_LR).data.squeeze().float().cpu().clamp_(0, 1).numpy()
output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0))
output = (output * 255.0).round()
imageio.imwrite('ESRGAN/results/{:s}.png'.format(base), output.astype(np.uint8))
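# The [2, 1, 0] channel flips above exist because cv2.imread returns BGR while the
# network and imageio expect RGB; pixel values are scaled to [0, 1] going into the
# model and back to 0-255 before writing the upscaled result.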
###Output
_____no_output_____
###Markdown
**Generate a video** Consider playing with higher FPS rates if you used the CLIP-guided diffusion method!
###Code
#@title Generate video using ffmpeg
init_frame = 1#@param {type: "number"}
last_frame = 25#@param {type: "number"}
min_fps = 60#@param {type: "number"}
max_fps = 60#@param {type: "number"}
total_frames = last_frame-init_frame
# Desired video runtime in seconds
length = 1#@param {type: "number"}
use_upscaled_images = False #@param {type: "boolean"}
frames = []
tqdm.write('Generating video...')
if use_upscaled_images == True:
for filename in os.listdir(ESRGAN_path + "/results/"):
filename = f"{ESRGAN_path}/results/{filename}"
frames.append(Image.open(filename))
elif use_upscaled_images == False:
for i in range(init_frame,last_frame): #
if usingDiffusion == False:
filename = f"{abs_root_path}/vqgan-steps/{i:04}.png"
frames.append(Image.open(filename))
elif usingDiffusion == True:
filename = f"{abs_root_path}/diffusion-steps/{i:05}.png"
frames.append(Image.open(filename))
#fps = last_frame/10
fps = np.clip(total_frames/length,min_fps,max_fps)
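# Worked example of the clamp above: with init_frame=1, last_frame=25 and length=1,
# the raw rate is 24/1 = 24 fps, which np.clip raises to min_fps (60 here), so the
# video plays faster than the requested runtime. Widen the [min_fps, max_fps] range
# if you want `length` to be honored exactly.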
# Names the video after the prompt if there is one, if not, defaults to video.mp4
def listToString(s):
# initialize an empty string
str1 = ""
# traverse in the string
for ele in s:
str1 += ele
# return string
return str1
video_filename = "video" #@param {type: "string"}
#@markdown Note: using images previously upscaled by ESRGAN may take longer to generate
video_filename = listToString(video_filename).replace(" ","_")
print("Video filename: "+ video_filename)
video_filename = video_filename + ".mp4"
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '17', '-preset', 'veryslow', video_filename], stdin=PIPE)
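# A note on the ffmpeg flags above: '-f image2pipe -vcodec png -i -' reads the PNG
# frames we pipe to stdin below; '-pix_fmt yuv420p' keeps the file playable in
# common players; '-crf 17' is visually near-lossless for libx264 (lower = larger
# file, higher quality); '-preset veryslow' trades encode time for compression.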
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Compressing video...")
p.wait()
print("Video ready.")
# @title Download video
from google.colab import files
files.download(video_filename)
# @title View video in browser
#@markdown
mp4 = open(video_filename,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display.HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
###Output
_____no_output_____
###Markdown
**Clear all generated files**
###Code
!rm -rf {abs_root_path}"/diffusion-steps"
!rm -rf {abs_root_path}"/vqgan-steps"
!rm -rf {ESRGAN_path}"/results"
###Output
_____no_output_____
###Markdown
Follow & tag me in your art, revision requests etc. on Twitter: [@somewheresy](https://twitter.com/somewheresy) **LEGACY VERSION**: [S2 VQGAN+CLIP Classic.ipynb](https://github.com/justin-bennington/somewhere-ml/blob/main/S2_VQGAN%2BCLIP_Classic.ipynb)The notebook you're currently using is a multimodal GAN art generator patched together from various ML notebooks for generative art (see license). The results of this notebook may be distinct from others in the space.This notebook is great for procedurally generating new images from a text prompt or input image. At Somewhere Systems we use this for everything from generative landscapes to materials design for 3D graphics. Consider checking out our work @ https://s2.lol and hiring us to demystify technology like AR, ML, etc.
###Code
#@title MIT License
#
# Copyright (c) 2021 Katherine Crowson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#@markdown What GPU am I using?
#@markdown V100 > P100 > everything else
!nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv
gpu_name = !nvidia-smi --query-gpu=gpu_name --format=csv
###Output
_____no_output_____
###Markdown
**Filesystem Setup**
###Code
#@markdown Use Temp Filesystem (not recommended)
import os
abs_root_path = "/content"
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
print("Well, I guess you're really bent on using the temporary runtime directory for some reason. Anyway, your root directory is: ")
!pwd
#@title Connect Google Drive (recommended)
import os
abs_root_path = "/content"
from google.colab import drive
drive.mount('/content/drive')
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
#@title Make a new folder & set root path to that folder (recommended)
#@markdown Saves a step if you don't have a folder in your Google Drive for this. Makes one, sets the root_path to that new folder. You can name it whatever you'd like:
folder_name = "AI_ART" #@param {type: "string"}
abs_root_path = "/content"
if len(folder_name) > 0:
path_tmp = abs_root_path + "/drive/MyDrive/" + folder_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created folder & set root path to: " + abs_root_path)
#@markdown Make & assign path to a project subfolder (optional)
project_name = "ALL_DATASETS_TEST" #@param {type: "string"}
if len(project_name) > 0:
path_tmp = abs_root_path + "/" + project_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created project subfolder & set root path to: " + abs_root_path)
ensureProperRootPath()
###Output
_____no_output_____
###Markdown
**Dependencies**
###Code
# @title Library Installation
import os
!nvidia-smi
print("Downloading CLIP...")
!git clone https://github.com/openai/CLIP &> /dev/null
!git clone https://github.com/crowsonkb/guided-diffusion &> /dev/null
!pip install -e ./CLIP &> /dev/null
print("Installing library for guided diffusion...")
!pip install -e ./guided-diffusion &> /dev/null
print("Installing Python Libraries for AI")
!git clone https://github.com/CompVis/taming-transformers &> /dev/null
!pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null
!pip install kornia &> /dev/null
!pip install einops &> /dev/null
print("Installing transformers library...")
!pip install transformers &> /dev/null
print("Installing libraries for managing metadata...")
!pip install stegano &> /dev/null
!apt install exempi &> /dev/null
!pip install python-xmp-toolkit &> /dev/null
!pip install imgtag &> /dev/null
!pip install pillow==7.1.2 &> /dev/null
!pip install taming-transformers &> /dev/null
print("Installing ESRGAN for image upscaling...")
!git clone https://github.com/xinntao/ESRGAN &> /dev/null
print("Installing ffmpeg for creating videos...")
!pip install imageio-ffmpeg &> /dev/null
if not os.path.exists(abs_root_path + "/vqgan-steps"):
!mkdir "vqgan-steps" &> /dev/null
print("No directory for VQGAN+CLIP image output found. Made directory: ~/vqgan-steps")
if not os.path.exists(abs_root_path + "/diffusion-steps"):
!mkdir "diffusion-steps" &> /dev/null
print("No directory for CLIP-guided diffusion image output found. Made directory: ~/diffusion-steps")
!pip freeze > requirements.txt
print("Installation finished.")
#@title Download pre-trained models
#@markdown Ensure you select a model you've downloaded in the parameters block
#@markdown The below radio button downloads the model for CLIP-guided diffusion, a method that takes a bit longer to produce good results but generally makes more "realistic" interpretations of the prompt. If you'd like to use this, make sure to download Katherine Crowson's diffusion model.
diffusion = True #@param {type: "boolean"}
#@markdown Models for VQGAN+CLIP method
imagenet_1024 = False #@param {type:"boolean"}
imagenet_16384 = False #@param {type:"boolean"}
coco = False #@param {type:"boolean"}
# faceshq = True #@param {type:"boolean"}
# wikiart_1024 = True #@param {type:"boolean"}
wikiart_16384 = False #@param {type:"boolean"}
sflckr = False #@param {type:"boolean"}
if not os.path.exists(abs_root_path + "/models"):
os.mkdir(abs_root_path + "/models")
os.chdir(abs_root_path + "/models")
print("changing to models subdirectory: ")
!pwd
if imagenet_1024:
# !curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.yaml' #ImageNet 1024
# !curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.ckpt' #ImageNet 1024
!curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
!curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
if imagenet_16384:
# !curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.yaml' #ImageNet 16384
# !curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.ckpt' #ImageNet 16384
!curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/f/867b05fc8c4841768640/?dl=1'
!curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/f/274fb24ed38341bfa753/?dl=1'
if coco:
!curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO
!curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO
#if faceshq:
#!curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ
#!curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ
#if wikiart_1024:
#!curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024
#!curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024
if wikiart_16384:
# !curl -L -o wikiart_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.yaml' #WikiArt 16384
# !curl -L -o wikiart_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.ckpt' #WikiArt 16384
!curl -L -o wikiart_16384.ckpt -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.ckpt'
!curl -L -o wikiart_16384.yaml -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.yaml'
if sflckr:
!curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR
!curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR
if diffusion:
# Download the diffusion model
!curl -L -o 512x512_diffusion_uncond_finetune_008100.pt -C - 'https://the-eye.eu/public/AI/models/512x512_diffusion_unconditional_ImageNet/512x512_diffusion_uncond_finetune_008100.pt' #DIFFUSION 512
# @title Load libraries and definitions
print(abs_root_path)
os.chdir(abs_root_path)
!pwd
import argparse
import math
from pathlib import Path
import io
import sys
sys.path.append('./taming-transformers')
from IPython import display
from base64 import b64encode
from omegaconf import OmegaConf
from PIL import Image
from taming.models import cond_transformer, vqgan
import torch
from torch import nn, optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from tqdm.notebook import tqdm
from CLIP import clip
import kornia.augmentation as K
import numpy as np
import imageio
from PIL import ImageFile, Image
from imgtag import ImgTag # metadata
from libxmp import * # metadata
import libxmp # metadata
from stegano import lsb
import json
import requests  # used by fetch() below when init_image is a URL
ImageFile.LOAD_TRUNCATED_IMAGES = True
sys.path.append('./CLIP')
sys.path.append('./guided-diffusion')
import clip
from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults
def sinc(x):
return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
def lanczos(x, a):
cond = torch.logical_and(-a < x, x < a)
out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
return out / out.sum()
def ramp(ratio, width):
n = math.ceil(width / ratio + 1)
out = torch.empty([n])
cur = 0
for i in range(out.shape[0]):
out[i] = cur
cur += ratio
return torch.cat([-out[1:].flip([0]), out])[1:-1]
def resample(input, size, align_corners=True):
n, c, h, w = input.shape
dh, dw = size
input = input.view([n * c, 1, h, w])
if dh < h:
kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
pad_h = (kernel_h.shape[0] - 1) // 2
input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
input = F.conv2d(input, kernel_h[None, None, :, None])
if dw < w:
kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
pad_w = (kernel_w.shape[0] - 1) // 2
input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
input = F.conv2d(input, kernel_w[None, None, None, :])
input = input.view([n, c, h, w])
return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
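# resample() above is a two-stage resize: when downscaling, it first low-pass
# filters with a separable Lanczos kernel (sinc windowed by sinc, sampled at the
# positions ramp() produces) to avoid aliasing, then lets bicubic interpolation
# handle the final size. Upscaling skips the prefilter since nothing needs removing.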
class ReplaceGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, x_forward, x_backward):
ctx.shape = x_backward.shape
return x_forward
@staticmethod
def backward(ctx, grad_in):
return None, grad_in.sum_to_size(ctx.shape)
replace_grad = ReplaceGrad.apply
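# replace_grad(a, b) is a straight-through estimator: the forward pass returns a,
# but gradients flow as if b had been returned. vector_quantize() below uses it so
# the non-differentiable codebook lookup still passes gradients back to the latents.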
class ClampWithGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, input, min, max):
ctx.min = min
ctx.max = max
ctx.save_for_backward(input)
return input.clamp(min, max)
@staticmethod
def backward(ctx, grad_in):
input, = ctx.saved_tensors
return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None
clamp_with_grad = ClampWithGrad.apply
def vector_quantize(x, codebook):
d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T
indices = d.argmin(-1)
x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook
return replace_grad(x_q, x)
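# What vector_quantize computes above: with x the latents arranged as [N, H, W, D]
# and a codebook of shape [K, D], d expands the squared distances ||x - c_k||^2 as
# x.x + c.c - 2*x@c.T, each latent vector is snapped to its nearest code via
# argmin/one_hot, and replace_grad keeps z trainable through the snap.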
class Prompt(nn.Module):
def __init__(self, embed, weight=1., stop=float('-inf')):
super().__init__()
self.register_buffer('embed', embed)
self.register_buffer('weight', torch.as_tensor(weight))
self.register_buffer('stop', torch.as_tensor(stop))
def forward(self, input):
input_normed = F.normalize(input.unsqueeze(1), dim=2)
embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)
dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)
dists = dists * self.weight.sign()
return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()
def parse_prompt(prompt):
vals = prompt.rsplit(':', 2)
vals = vals + ['', '1', '-inf'][len(vals):]
return vals[0], float(vals[1]), float(vals[2])
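# Prompt syntax handled above is "text:weight:stop", with weight and stop optional.
# For example, parse_prompt('a duck') -> ('a duck', 1.0, -inf) and
# parse_prompt('a duck:0.5:-1') -> ('a duck', 0.5, -1.0); a negative weight steers
# the image away from the text instead of toward it.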
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
self.augs = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
# K.RandomSolarize(0.01, 0.01, p=0.7),
K.RandomSharpness(0.3,p=0.4),
K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'),
K.RandomPerspective(0.2,p=0.4),
K.ColorJitter(hue=0.01, saturation=0.01, p=0.7))
self.noise_fac = 0.1
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
batch = self.augs(torch.cat(cutouts, dim=0))
if self.noise_fac:
facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)
batch = batch + facs * torch.randn_like(batch)
return batch
def load_vqgan_model(config_path, checkpoint_path):
config = OmegaConf.load(config_path)
if config.model.target == 'taming.models.vqgan.VQModel':
model = vqgan.VQModel(**config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':
parent_model = cond_transformer.Net2NetTransformer(**config.model.params)
parent_model.eval().requires_grad_(False)
parent_model.init_from_ckpt(checkpoint_path)
model = parent_model.first_stage_model
else:
raise ValueError(f'unknown model type: {config.model.target}')
del model.loss
return model
def resize_image(image, out_size):
ratio = image.size[0] / image.size[1]
area = min(image.size[0] * image.size[1], out_size[0] * out_size[1])
size = round((area * ratio)**0.5), round((area / ratio)**0.5)
return image.resize(size, Image.LANCZOS)
# Define necessary functions for CLIP guided diffusion
def fetch(url_or_path):
if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):
r = requests.get(url_or_path)
r.raise_for_status()
fd = io.BytesIO()
fd.write(r.content)
fd.seek(0)
return fd
return open(url_or_path, 'rb')
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
return torch.cat(cutouts)
def spherical_dist_loss(x, y):
x = F.normalize(x, dim=-1)
y = F.normalize(y, dim=-1)
return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
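# Geometry of the loss above: for unit vectors, ||x - y|| is the chord length and
# arcsin(chord/2) is half the angle theta between them, so the expression equals
# theta^2 / 2 -- a squared angular distance on the embedding sphere rather than a
# raw Euclidean one.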
def tv_loss(input):
"""L2 total variation loss, as in Mahendran et al."""
input = F.pad(input, (0, 1, 0, 1), 'replicate')
x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]
y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]
return (x_diff**2 + y_diff**2).mean([1, 2, 3])
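# In symbols, the penalty above is TV(x) = mean over channels and pixels of
# (x[i, j+1] - x[i, j])^2 + (x[i+1, j] - x[i, j])^2, i.e. squared finite
# differences in both spatial directions; a larger tv_scale therefore favors
# smoother, less noisy images.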
###Output
_____no_output_____
###Markdown
**Image Generation**
###Code
#@markdown ### Set Global Parameters
os.chdir(abs_root_path)
seed = -1#@param {type:"number"}
display_frequency = 50#@param {type:"number"}
usingDiffusion = False
#@markdown #**VQGAN+CLIP**
prompts = "A duck sitting in a pond" #@param {type:"string"}
width = 512#@param {type:"number"}
height = 512#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
vqgan_model = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_16384", "coco", "sflckr"]
initial_image = ""#@param {type:"string"}
target_images = ""#@param {type:"string"}
max_iterations = 10000#@param {type:"number"}
input_images = ""
#@markdown ### Advanced VQGAN+CLIP Parameters
vq_init_weight = 0.0#@param {type:"number"}
vq_step_size = 0.1#@param {type:"number"}
vq_cutn = 64#@param {type:"number"}
vq_cutpow = 1.0#@param {type:"number"}
model_names={"vqgan_imagenet_f16_16384": 'ImageNet 16384',"vqgan_imagenet_f16_1024":"ImageNet 1024",
"wikiart_1024":"WikiArt 1024", "wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR"}
model_name = model_names[vqgan_model]
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
if seed == -1:
seed = None
if initial_image == "None":
initial_image = None
if target_images == "None" or not target_images:
target_images = []
else:
target_images = target_images.split("|")
target_images = [image.strip() for image in target_images]
if initial_image or target_images != []:
input_images = True
prompts = [frase.strip() for frase in prompts.split("|")]
if prompts == ['']:
prompts = []
args = argparse.Namespace(
prompts=prompts,
image_prompts=target_images,
noise_prompt_seeds=[],
noise_prompt_weights=[],
size=[width, height],
init_image=initial_image,
init_weight=0.,
clip_model= clip_model,
vqgan_config=f'models/{vqgan_model}.yaml',
vqgan_checkpoint=f'models/{vqgan_model}.ckpt',
step_size=vq_step_size,
cutn=vq_cutn,
cut_pow=vq_cutpow,
display_freq=display_frequency,
seed=seed,
)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
notebook_name = "VQGAN+CLIP"
print('Executing using VQGAN+CLIP method')
print('Using device:', device)
if prompts:
print('Using text prompt:', prompts)
if target_images:
print('Using image prompts:', target_images)
if args.seed is None:
seed = torch.seed()
else:
seed = args.seed
torch.manual_seed(seed)
print('Using seed:', seed)
model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
cut_size = perceptor.visual.input_resolution
e_dim = model.quantize.e_dim
f = 2**(model.decoder.num_resolutions - 1)
make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow)
n_toks = model.quantize.n_e
toksX, toksY = args.size[0] // f, args.size[1] // f
sideX, sideY = toksX * f, toksY * f
z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
if args.init_image:
pil_image = Image.open(args.init_image).convert('RGB')
pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
else:
one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
z = one_hot @ model.quantize.embedding.weight
z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
z_orig = z.clone()
z.requires_grad_(True)
opt = optim.Adam([z], lr=args.step_size)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
pMs = []
for prompt in args.prompts:
txt, weight, stop = parse_prompt(prompt)
embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for prompt in args.image_prompts:
path, weight, stop = parse_prompt(prompt)
img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
embed = perceptor.encode_image(normalize(batch)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights):
gen = torch.Generator().manual_seed(seed)
embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
pMs.append(Prompt(embed, weight).to(device))
def synth(z):
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)
return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)
def add_xmp_data(nombrefichero):
image = ImgTag(filename=nombrefichero)
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True})
if args.prompts:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True})
else:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', model_name, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
#for frases in args.prompts:
# image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.close()
def add_stegano_data(filename):
data = {
"title": " | ".join(args.prompts) if args.prompts else None,
"notebook": notebook_name,
"i": i,
"model": model_name,
"seed": str(seed),
"input_images": input_images
}
lsb.hide(filename, json.dumps(data)).save(filename)
@torch.no_grad()
def checkin(i, losses):
losses_str = ', '.join(f'{loss.item():g}' for loss in losses)
tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')
out = synth(z)
TF.to_pil_image(out[0].cpu()).save('progress.png')
add_stegano_data('progress.png')
add_xmp_data('progress.png')
display.display(display.Image('progress.png'))
def ascend_txt():
global i
out = synth(z)
iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
result = []
if args.init_weight:
result.append(F.mse_loss(z, z_orig) * args.init_weight / 2)
for prompt in pMs:
result.append(prompt(iii))
img = out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8)
img = np.transpose(img, (1, 2, 0))
filename = f"vqgan-steps/{i:04}.png"
imageio.imwrite(filename, img)
add_stegano_data(filename)
add_xmp_data(filename)
return result
def train(i):
opt.zero_grad()
lossAll = ascend_txt()
if i % args.display_freq == 0:
checkin(i, lossAll)
loss = sum(lossAll)
loss.backward()
opt.step()
with torch.no_grad():
z.copy_(z.maximum(z_min).minimum(z_max))
i = 0
try:
with tqdm() as pbar:
while True:
train(i)
if i == max_iterations:
break
i += 1
pbar.update()
except KeyboardInterrupt:
pass
#@markdown # **CLIP-Guided Diffusion**
#@markdown ##### WARNING: This requires access to 16GB of VRAM reliably, so may not work for users not using Colab Pro/+
usingDiffusion = True
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
prompt = "A group of vultures decide the behavior of the stock market\""#@param {type:"string"}
batch_size = 1#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
#@markdown Controls how much the image should look like the prompt.
clip_guidance_scale = 1000#@param {type:"number"}
#@markdown Controls the smoothness of the final output.
tv_scale = 150#@param {type:"number"}
cutn = 32#@param {type:"number"}
cut_pow = 0.5#@param {type:"number"}
n_batches = 1#@param {type:"number"}
#@markdown This can be an URL or Colab local path and must be in quotes.
init_image = None #@param {type:"string"}
#@markdown This needs to be between approx. 200 and 500 when using an init image.
#@markdown Higher values make the output look more like the init.
skip_timesteps = 0#@param {type:"number"}
diffusion_steps = 1500#@param {type:"number"}
if seed == -1:
seed = None
diff_image_size = 256 # size of image when using diffusion (not referenced below; model_config['image_size'] controls the run)
model_config = model_and_diffusion_defaults()
model_config.update({
'attention_resolutions': '32, 16, 8',
'class_cond': False,
'diffusion_steps': diffusion_steps,
'rescale_timesteps': True,
'timestep_respacing': str(diffusion_steps), # Modify this value to decrease the number of
# timesteps.
'image_size': 512,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 256,
'num_head_channels': 64,
'num_res_blocks': 2,
'resblock_updown': True,
'use_fp16': True,
'use_scale_shift_norm': True,
})
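# A note on 'timestep_respacing' above: guided-diffusion also accepts strings of
# the form 'ddimN' (e.g. 'ddim50') to sample with N DDIM steps instead of the full
# schedule, which is why the sampler is chosen by the startswith('ddim') check
# further down.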
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Executing using CLIP guided diffusion method')
if prompt is not None:
print('Using prompt: '+ prompt)
print('Using device:', device)
model, diffusion = create_model_and_diffusion(**model_config)
model.load_state_dict(torch.load(abs_root_path + "/models/" + '512x512_diffusion_uncond_finetune_008100.pt', map_location='cpu'))
model.requires_grad_(False).eval().to(device)
for name, param in model.named_parameters():
if 'qkv' in name or 'norm' in name or 'proj' in name:
param.requires_grad_()
if model_config['use_fp16']:
model.convert_to_fp16()
clip_model = clip.load(clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
clip_size = clip_model.visual.input_resolution
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
def do_run():
if seed is not None:
torch.manual_seed(seed)
text_embed = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()
init = None
if init_image is not None:
init = Image.open(fetch(init_image)).convert('RGB')
init = init.resize((model_config['image_size'], model_config['image_size']), Image.LANCZOS)
init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)
make_cutouts = MakeCutouts(clip_size, cutn, cut_pow)
cur_t = None
def cond_fn(x, t, y=None):
with torch.enable_grad():
x = x.detach().requires_grad_()
n = x.shape[0]
my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t
out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})
fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]
x_in = out['pred_xstart'] * fac + x * (1 - fac)
clip_in = normalize(make_cutouts(x_in.add(1).div(2)))
image_embeds = clip_model.encode_image(clip_in).float().view([cutn, n, -1])
dists = spherical_dist_loss(image_embeds, text_embed.unsqueeze(0))
losses = dists.mean(0)
tv_losses = tv_loss(x_in)
loss = losses.sum() * clip_guidance_scale + tv_losses.sum() * tv_scale
return -torch.autograd.grad(loss, x)[0]
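# cond_fn implements classifier-style guidance: it differentiates the combined
# CLIP-similarity + total-variation loss with respect to the noisy image x and
# returns the negative gradient, which the sampler folds into each denoising step
# to pull the trajectory toward images whose CLIP embedding matches the prompt.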
if model_config['timestep_respacing'].startswith('ddim'):
sample_fn = diffusion.ddim_sample_loop_progressive
else:
sample_fn = diffusion.p_sample_loop_progressive
for i in range(n_batches):
cur_t = diffusion.num_timesteps - skip_timesteps - 1
samples = sample_fn(
model,
(batch_size, 3, model_config['image_size'], model_config['image_size']),
clip_denoised=False,
model_kwargs={},
cond_fn=cond_fn,
progress=True,
skip_timesteps=skip_timesteps,
init_image=init,
randomize_class=True,
)
for j, sample in enumerate(samples):
cur_t -= 1
for k, image in enumerate(sample['pred_xstart']):
# index by batch and sample so images from the same batch don't overwrite each other
filename = f'diffusion-steps/{j * batch_size + k:05}.png'
TF.to_pil_image(image.add(1).div(2).clamp(0, 1)).save(filename)
if j % display_frequency == 0 or cur_t == -1:
tqdm.write(f'Batch {i}, step {j}, output {k}:')
print()
display.display(display.Image(filename))
do_run()
###Output
_____no_output_____
###Markdown
**Upscale an image or a folder of images**
###Code
#@title Image Upscaling Setup
# partially adapted from https://colab.research.google.com/github/AhabbscienceStudioPak/ESRGAN/blob/master/ESRGAN_Colab.ipynb#scrollTo=MZuFBZncXRy1
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
ESRGAN_path = abs_root_path + "/ESRGAN"
if not os.path.exists(ESRGAN_path):
os.mkdir(ESRGAN_path)
import gdown
print("Downloading pretrained models")
output1 = ESRGAN_path + '/models/RRDB_ESRGAN_x4.pth'
output2 = ESRGAN_path + '/models/RRDB_PSNR_x4.pth'
output3 = ESRGAN_path + '/models/PPON_D.pth'
output4 = ESRGAN_path + '/models/PPON_G.pth'
print ('Downloading RRDB_ESRGAN_x4.pth')
gdown.download('https://drive.google.com/uc?id=1TPrz5QKd8DHHt1k8SRtm6tMiPjz_Qene', output1, quiet=True)
print ('Downloading RRDB_PSNR_x4.pth')
gdown.download('https://drive.google.com/uc?id=1pJ_T-V1dpb1ewoEra1TGSWl5e6H7M4NN', output2, quiet=True)
print ('Downloading PPON_D.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=1Fr5aKCD6mw6P-hI0BZr6My2gHNhtUk-V', output3, quiet=True)
print ('Downloading PPON_G.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=12uR3BSftNA0HDYiKda23GyAj_crpSjOm', output4, quiet=True)
#@title **Execute Image Upscaling**
import os.path as osp
import glob
import cv2
import numpy as np
import torch
from ESRGAN import RRDBNet_arch as arch
import requests
import imageio
import warnings
warnings.filterwarnings("ignore")
from google.colab import files
Choose_device = "cuda"
model_path = 'models/RRDB_PSNR_x4.pth' #@param ['models/RRDB_ESRGAN_x4.pth','models/RRDB_PSNR_x4.pth','models/PPON_G.pth','models/PPON_D.pth']
device = torch.device(Choose_device)
model_path = ESRGAN_path + '/' + model_path
esr_target_directory = 'your path in quotes' #@param {type:"string"}
test_img_folder = esr_target_directory
model = arch.RRDBNet(3, 3, 64, 23, gc=32)
model.load_state_dict(torch.load(model_path), strict=True)
model.eval()
model = model.to(device)
print('Model path {:s}. \nTesting...'.format(model_path))
idx = 0
for filename in os.listdir(test_img_folder):
filename = test_img_folder + "/" + filename
idx += 1
base = osp.splitext(osp.basename(filename))[0]
print(idx, base)
# read images
img = cv2.imread(filename, cv2.IMREAD_COLOR)
img = img * 1.0 / 255.0
img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float()
img_LR = img.unsqueeze(0)
img_LR = img_LR.to(device)
with torch.no_grad():
output = model(img_LR).data.squeeze().float().cpu().clamp_(0, 1).numpy()
output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0))
output = (output * 255.0).round()
imageio.imwrite('ESRGAN/results/{:s}.png'.format(base), output.astype(np.uint8))
###Output
_____no_output_____
###Markdown
**Generate a video** Consider playing with higher FPS rates if you used the CLIP-guided diffusion method!
###Code
#@title Generate video using ffmpeg
init_frame = 1#@param {type: "number"}
last_frame = 25#@param {type: "number"}
min_fps = 60#@param {type: "number"}
max_fps = 60#@param {type: "number"}
total_frames = last_frame-init_frame
# Desired video runtime in seconds
length = 1#@param {type: "number"}
use_upscaled_images = True #@param {type: "boolean"}
frames = []
tqdm.write('Generating video...')
if use_upscaled_images == True:
for filename in os.listdir(ESRGAN_path + "/results/"):
filename = f"{ESRGAN_path}/results/{filename}"
frames.append(Image.open(filename))
elif use_upscaled_images == False:
for i in range(init_frame,last_frame): #
if usingDiffusion == False:
filename = f"{abs_root_path}/vqgan-steps/{i:04}.png"
elif usingDiffusion == True:
filename = f"{abs_root_path}/diffusion-steps/{i:05}.png"
frames.append(Image.open(filename))
#fps = last_frame/10
fps = np.clip(total_frames/length,min_fps,max_fps)
# Names the video after the prompt if there is one, if not, defaults to video.mp4
def listToString(s):
# initialize an empty string
str1 = ""
# traverse in the string
for ele in s:
str1 += ele
# return string
return str1
video_filename = "video" #@param {type: "string"}
#@markdown Note: using images previously upscaled by ESRGAN may take longer to generate
video_filename = listToString(video_filename).replace(" ","_")
print("Video filename: "+ video_filename)
video_filename = video_filename + ".mp4"
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '17', '-preset', 'veryslow', video_filename], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Compressing video...")
p.wait()
print("Video ready.")
# @title Download video
from google.colab import files
files.download(video_filename)
# @title View video in browser
#@markdown
mp4 = open(video_filename,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display.HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
###Output
_____no_output_____
###Markdown
**Clear all generated files**
###Code
!rm -rf {abs_root_path}"/diffusion-steps"
!rm -rf {abs_root_path}"/vqgan-steps"
!rm -rf {ESRGAN_path}"/results"
###Output
_____no_output_____
###Markdown
Follow & tag me in your art, revision requests etc. on Twitter: [@somewheresy](https://twitter.com/somewheresy) **LEGACY VERSION**: [S2 VQGAN+CLIP Classic.ipynb](https://github.com/justin-bennington/somewhere-ml/blob/main/S2_VQGAN%2BCLIP_Classic.ipynb)The notebook you're currently using is a multimodal GAN art generator patched together from various ML notebooks for generative art (see license). The results of this notebook may be distinct from others in the space.This notebook is great for procedurally generating new images from a text prompt or input image. At Somewhere Systems we use this for everything from generative landscapes to materials design for 3D graphics. Consider checking out our work @ https://s2.lol and hiring us to demystify technology like AR, ML, etc.
###Code
#@title MIT License
#
# Copyright (c) 2021 Katherine Crowson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#@markdown What GPU am I using?
#@markdown V100 > P100 > everything else
!nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv
gpu_name = !nvidia-smi --query-gpu=gpu_name --format=csv
###Output
_____no_output_____
###Markdown
**Filesystem Setup**
###Code
#@markdown Use Temp Filesystem (not recommended)
import os
abs_root_path = "/content"
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
print("Well, I guess you're really bent on using the temporary runtime directory for some reason. Anyway, your root directory is: ")
!pwd
#@title Connect Google Drive (recommended)
import os
abs_root_path = "/content"
from google.colab import drive
drive.mount('/content/drive')
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
#@title Make a new folder & set root path to that folder (recommended)
#@markdown Saves a step if you don't have a folder in your Google Drive for this. Makes one, sets the root_path to that new folder. You can name it whatever you'd like:
folder_name = "AI_ART" #@param {type: "string"}
abs_root_path = "/content"
if len(folder_name) > 0:
path_tmp = abs_root_path + "/drive/MyDrive/" + folder_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created folder & set root path to: " + abs_root_path)
#@markdown Make & assign path to a project subfolder (optional)
project_name = "ALL_DATASETS_TEST" #@param {type: "string"}
if len(project_name) > 0:
path_tmp = abs_root_path + "/" + project_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created project subfolder & set root path to: " + abs_root_path)
ensureProperRootPath()
###Output
_____no_output_____
###Markdown
**Dependencies**
###Code
# @title Library Installation
import os
!nvidia-smi
print("Downloading CLIP...")
!git clone https://github.com/openai/CLIP &> /dev/null
!git clone https://github.com/crowsonkb/guided-diffusion &> /dev/null
!pip install -e ./CLIP &> /dev/null
print("Installing library for guided diffusion...")
!pip install -e ./guided-diffusion &> /dev/null
print("Installing Python Libraries for AI")
!git clone https://github.com/CompVis/taming-transformers &> /dev/null
!pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null
!pip install kornia &> /dev/null
!pip install einops &> /dev/null
print("Installing transformers library...")
!pip install transformers &> /dev/null
print("Installing libraries for managing metadata...")
!pip install stegano &> /dev/null
!apt install exempi &> /dev/null
!pip install python-xmp-toolkit &> /dev/null
!pip install imgtag &> /dev/null
!pip install pillow==7.1.2 &> /dev/null
!pip install taming-transformers &> /dev/null
print("Installing ESRGAN for image upscaling...")
!git clone https://github.com/xinntao/ESRGAN &> /dev/null
print("Installing ffmpeg for creating videos...")
!pip install imageio-ffmpeg &> /dev/null
if not os.path.exists(abs_root_path + "/vqgan-steps"):
!mkdir "vqgan-steps" &> /dev/null
print("No directory for VQGAN+CLIP image output found. Made directory: ~/vqgan-steps")
if not os.path.exists(abs_root_path + "/diffusion-steps"):
!mkdir "diffusion-steps" &> /dev/null
print("No directory for CLIP-guided diffusion image output found. Made directory: ~/diffusion-steps")
!pip freeze > requirements.txt
print("Installation finished.")
#@title Download pre-trained models
#@markdown Ensure you select a model you've downloaded in the parameters block
#@markdown The below radio button downloads the model for CLIP-guided diffusion, a method that takes a bit longer to produce good results but generally makes more "realistic" interpretations of the prompt. If you'd like to use this, make sure to download Katherine Crowson's diffusion model.
diffusion = True #@param {type: "boolean"}
#@markdown Models for VQGAN+CLIP method
imagenet_1024 = False #@param {type:"boolean"}
imagenet_16384 = False #@param {type:"boolean"}
coco = False #@param {type:"boolean"}
# faceshq = True #@param {type:"boolean"}
# wikiart_1024 = True #@param {type:"boolean"}
wikiart_16384 = False #@param {type:"boolean"}
sflckr = False #@param {type:"boolean"}
if not os.path.exists(abs_root_path + "/models"):
os.mkdir(abs_root_path + "/models")
os.chdir(abs_root_path + "/models")
print("changing to models subdirectory: ")
!pwd
if imagenet_1024:
# !curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.yaml' #ImageNet 1024
# !curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.ckpt' #ImageNet 1024
!curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
!curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
if imagenet_16384:
# !curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.yaml' #ImageNet 16384
# !curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.ckpt' #ImageNet 16384
!curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/f/867b05fc8c4841768640/?dl=1'
!curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/f/274fb24ed38341bfa753/?dl=1'
if coco:
!curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO
!curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO
#if faceshq:
#!curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ
#!curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ
#if wikiart_1024:
#!curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024
#!curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024
if wikiart_16384:
# !curl -L -o wikiart_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.yaml' #WikiArt 16384
# !curl -L -o wikiart_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.ckpt' #WikiArt 16384
!curl -L -o wikiart_16384.ckpt -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.ckpt'
!curl -L -o wikiart_16384.yaml -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.yaml'
if sflckr:
!curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR
!curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR
if diffusion:
# Download the diffusion model
!curl -L -o 512x512_diffusion_uncond_finetune_008100.pt -C - 'https://the-eye.eu/public/AI/models/512x512_diffusion_unconditional_ImageNet/512x512_diffusion_uncond_finetune_008100.pt' #DIFFUSION 512
# @title Load libraries and definitions
print(abs_root_path)
os.chdir(abs_root_path)
!pwd
import argparse
import math
from pathlib import Path
import io
import sys
sys.path.append('./taming-transformers')
from IPython import display
from base64 import b64encode
from omegaconf import OmegaConf
from PIL import Image
from taming.models import cond_transformer, vqgan
import torch
from torch import nn, optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from tqdm.notebook import tqdm
from CLIP import clip
import kornia.augmentation as K
import numpy as np
import imageio
from PIL import ImageFile, Image
from imgtag import ImgTag # metadata
from libxmp import * # metadata
import libxmp # metadata
from stegano import lsb
import json
import requests  # used by fetch() below when init_image is a URL
ImageFile.LOAD_TRUNCATED_IMAGES = True
sys.path.append('./CLIP')
sys.path.append('./guided-diffusion')
import clip
from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults
def sinc(x):
return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
def lanczos(x, a):
cond = torch.logical_and(-a < x, x < a)
out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
return out / out.sum()
def ramp(ratio, width):
n = math.ceil(width / ratio + 1)
out = torch.empty([n])
cur = 0
for i in range(out.shape[0]):
out[i] = cur
cur += ratio
return torch.cat([-out[1:].flip([0]), out])[1:-1]
def resample(input, size, align_corners=True):
n, c, h, w = input.shape
dh, dw = size
input = input.view([n * c, 1, h, w])
if dh < h:
kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
pad_h = (kernel_h.shape[0] - 1) // 2
input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
input = F.conv2d(input, kernel_h[None, None, :, None])
if dw < w:
kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
pad_w = (kernel_w.shape[0] - 1) // 2
input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
input = F.conv2d(input, kernel_w[None, None, None, :])
input = input.view([n, c, h, w])
return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
class ReplaceGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, x_forward, x_backward):
ctx.shape = x_backward.shape
return x_forward
@staticmethod
def backward(ctx, grad_in):
return None, grad_in.sum_to_size(ctx.shape)
replace_grad = ReplaceGrad.apply
class ClampWithGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, input, min, max):
ctx.min = min
ctx.max = max
ctx.save_for_backward(input)
return input.clamp(min, max)
@staticmethod
def backward(ctx, grad_in):
input, = ctx.saved_tensors
return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None
clamp_with_grad = ClampWithGrad.apply
def vector_quantize(x, codebook):
d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T
indices = d.argmin(-1)
x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook
return replace_grad(x_q, x)
class Prompt(nn.Module):
def __init__(self, embed, weight=1., stop=float('-inf')):
super().__init__()
self.register_buffer('embed', embed)
self.register_buffer('weight', torch.as_tensor(weight))
self.register_buffer('stop', torch.as_tensor(stop))
def forward(self, input):
input_normed = F.normalize(input.unsqueeze(1), dim=2)
embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)
dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)
dists = dists * self.weight.sign()
return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()
def parse_prompt(prompt):
vals = prompt.rsplit(':', 2)
vals = vals + ['', '1', '-inf'][len(vals):]
return vals[0], float(vals[1]), float(vals[2])
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
self.augs = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
# K.RandomSolarize(0.01, 0.01, p=0.7),
K.RandomSharpness(0.3,p=0.4),
K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'),
K.RandomPerspective(0.2,p=0.4),
K.ColorJitter(hue=0.01, saturation=0.01, p=0.7))
self.noise_fac = 0.1
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
batch = self.augs(torch.cat(cutouts, dim=0))
if self.noise_fac:
facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)
batch = batch + facs * torch.randn_like(batch)
return batch
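# How the cutout sizes above are drawn: torch.rand([])**cut_pow is a number in
# [0, 1], so with cut_pow > 1 it skews toward 0 and most cutouts are small (more
# local detail), while cut_pow < 1 skews them large (more global structure). The
# Kornia augmentations and per-cutout noise make CLIP's gradients less brittle.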
def load_vqgan_model(config_path, checkpoint_path):
config = OmegaConf.load(config_path)
if config.model.target == 'taming.models.vqgan.VQModel':
model = vqgan.VQModel(**config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':
parent_model = cond_transformer.Net2NetTransformer(**config.model.params)
parent_model.eval().requires_grad_(False)
parent_model.init_from_ckpt(checkpoint_path)
model = parent_model.first_stage_model
else:
raise ValueError(f'unknown model type: {config.model.target}')
del model.loss
return model
def resize_image(image, out_size):
ratio = image.size[0] / image.size[1]
area = min(image.size[0] * image.size[1], out_size[0] * out_size[1])
size = round((area * ratio)**0.5), round((area / ratio)**0.5)
return image.resize(size, Image.LANCZOS)
# Define necessary functions for CLIP guided diffusion
def fetch(url_or_path):
if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):
r = requests.get(url_or_path)
r.raise_for_status()
fd = io.BytesIO()
fd.write(r.content)
fd.seek(0)
return fd
return open(url_or_path, 'rb')
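# fetch() above returns a binary file-like object for either a remote URL (read
# fully into a BytesIO buffer) or a local path, so callers such as the init_image
# loader can treat both cases uniformly with Image.open(fetch(...)).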
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
return torch.cat(cutouts)
def spherical_dist_loss(x, y):
x = F.normalize(x, dim=-1)
y = F.normalize(y, dim=-1)
return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
def tv_loss(input):
"""L2 total variation loss, as in Mahendran et al."""
input = F.pad(input, (0, 1, 0, 1), 'replicate')
x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]
y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]
return (x_diff**2 + y_diff**2).mean([1, 2, 3])
###Output
_____no_output_____
###Markdown
**Image Generation**
###Code
#@markdown ### Set Global Parameters
os.chdir(abs_root_path)
seed = -1#@param {type:"number"}
display_frequency = 50#@param {type:"number"}
usingDiffusion = False
#@markdown #**VQGAN+CLIP**
prompts = "A duck sitting in a pond" #@param {type:"string"}
width = 512#@param {type:"number"}
height = 512#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
vqgan_model = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_16384", "coco", "sflckr"]
initial_image = ""#@param {type:"string"}
target_images = ""#@param {type:"string"}
max_iterations = 10000#@param {type:"number"}
input_images = ""
#@markdown ### Advanced VQGAN+CLIP Parameters
vq_init_weight = 0.0#@param {type:"number"}
vq_step_size = 0.1#@param {type:"number"}
vq_cutn = 64#@param {type:"number"}
vq_cutpow = 1.0#@param {type:"number"}
model_names={"vqgan_imagenet_f16_16384": 'ImageNet 16384',"vqgan_imagenet_f16_1024":"ImageNet 1024",
"wikiart_1024":"WikiArt 1024", "wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR"}
model_name = model_names[vqgan_model]
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
if seed == -1:
seed = None
if initial_image == "None":
initial_image = None
if target_images == "None" or not target_images:
target_images = []
else:
target_images = target_images.split("|")
target_images = [image.strip() for image in target_images]
if initial_image or target_images != []:
input_images = True
prompts = [frase.strip() for frase in prompts.split("|")]
if prompts == ['']:
prompts = []
args = argparse.Namespace(
prompts=prompts,
image_prompts=target_images,
noise_prompt_seeds=[],
noise_prompt_weights=[],
size=[width, height],
init_image=initial_image,
init_weight=0.,
clip_model= clip_model,
vqgan_config=f'models/{vqgan_model}.yaml',
vqgan_checkpoint=f'models/{vqgan_model}.ckpt',
step_size=vq_step_size,
cutn=vq_cutn,
cut_pow=vq_cutpow,
display_freq=display_frequency,
seed=seed,
)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
notebook_name = "VQGAN+CLIP"
print('Executing using VQGAN+CLIP method')
print('Using device:', device)
if prompts:
print('Using text prompt:', prompts)
if target_images:
print('Using image prompts:', target_images)
if args.seed is None:
seed = torch.seed()
else:
seed = args.seed
torch.manual_seed(seed)
print('Using seed:', seed)
model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
cut_size = perceptor.visual.input_resolution
e_dim = model.quantize.e_dim
f = 2**(model.decoder.num_resolutions - 1)
make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow)
n_toks = model.quantize.n_e
toksX, toksY = args.size[0] // f, args.size[1] // f
sideX, sideY = toksX * f, toksY * f
z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
if args.init_image:
pil_image = Image.open(args.init_image).convert('RGB')
pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
else:
one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
z = one_hot @ model.quantize.embedding.weight
z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
z_orig = z.clone()
z.requires_grad_(True)
opt = optim.Adam([z], lr=args.step_size)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
pMs = []
for prompt in args.prompts:
txt, weight, stop = parse_prompt(prompt)
embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for prompt in args.image_prompts:
path, weight, stop = parse_prompt(prompt)
img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
embed = perceptor.encode_image(normalize(batch)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights):
gen = torch.Generator().manual_seed(seed)
embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
pMs.append(Prompt(embed, weight).to(device))
def synth(z):
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)
return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)
def add_xmp_data(nombrefichero):
image = ImgTag(filename=nombrefichero)
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True})
if args.prompts:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True})
else:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', model_name, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
#for frases in args.prompts:
# image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.close()
def add_stegano_data(filename):
data = {
"title": " | ".join(args.prompts) if args.prompts else None,
"notebook": notebook_name,
"i": i,
"model": model_name,
"seed": str(seed),
"input_images": input_images
}
lsb.hide(filename, json.dumps(data)).save(filename)
@torch.no_grad()
def checkin(i, losses):
losses_str = ', '.join(f'{loss.item():g}' for loss in losses)
tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')
out = synth(z)
TF.to_pil_image(out[0].cpu()).save('progress.png')
add_stegano_data('progress.png')
add_xmp_data('progress.png')
display.display(display.Image('progress.png'))
def ascend_txt():
global i
out = synth(z)
iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
result = []
if args.init_weight:
result.append(F.mse_loss(z, z_orig) * args.init_weight / 2)
for prompt in pMs:
result.append(prompt(iii))
img = np.array(out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8))[:,:,:]
img = np.transpose(img, (1, 2, 0))
filename = f"vqgan-steps/{i:04}.png"
imageio.imwrite(filename, np.array(img))
add_stegano_data(filename)
add_xmp_data(filename)
return result
def train(i):
opt.zero_grad()
lossAll = ascend_txt()
if i % args.display_freq == 0:
checkin(i, lossAll)
loss = sum(lossAll)
loss.backward()
opt.step()
with torch.no_grad():
z.copy_(z.maximum(z_min).minimum(z_max))
i = 0
try:
with tqdm() as pbar:
while True:
train(i)
if i == max_iterations:
break
i += 1
pbar.update()
except KeyboardInterrupt:
pass
#@markdown # **CLIP-Guided Diffusion**
#@markdown ##### WARNING: This requires access to 16GB of VRAM reliably, so may not work for users not using Colab Pro/+
usingDiffusion = True
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
prompt = "A group of vultures decide the behavior of the stock market\""#@param {type:"string"}
batch_size = 1#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
#@markdown Controls how much the image should look like the prompt.
clip_guidance_scale = 1000#@param {type:"number"}
#@markdown Controls the smoothness of the final output.
tv_scale = 150#@param {type:"number"}
cutn = 32#@param {type:"number"}
cut_pow = 0.5#@param {type:"number"}
n_batches = 1#@param {type:"number"}
#@markdown This can be a URL or a Colab local path and must be in quotes.
init_image = None #@param {type:"string"}
#@markdown This needs to be between approx. 200 and 500 when using an init image.
#@markdown Higher values make the output look more like the init.
skip_timesteps = 0#@param {type:"number"}
diffusion_steps = 1500#@param {type:"number"}
if seed == -1:
seed = None
diff_image_size = 256 # size of image when using diffusion
diff_image_size = int(diff_image_size)
model_config = model_and_diffusion_defaults()
model_config.update({
'attention_resolutions': '32, 16, 8',
'class_cond': False,
'diffusion_steps': diffusion_steps,
'rescale_timesteps': True,
'timestep_respacing': str(diffusion_steps), # Modify this value to decrease the number of
# timesteps.
'image_size': 512,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 256,
'num_head_channels': 64,
'num_res_blocks': 2,
'resblock_updown': True,
'use_fp16': True,
'use_scale_shift_norm': True,
})
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Executing using CLIP guided diffusion method')
if prompt is not None:
print('Using prompt: '+ prompt)
print('Using device:', device)
model, diffusion = create_model_and_diffusion(**model_config)
model.load_state_dict(torch.load(abs_root_path + "/models/" + '512x512_diffusion_uncond_finetune_008100.pt', map_location='cpu'))
model.requires_grad_(False).eval().to(device)
for name, param in model.named_parameters():
if 'qkv' in name or 'norm' in name or 'proj' in name:
param.requires_grad_()
if model_config['use_fp16']:
model.convert_to_fp16()
clip_model = clip.load(clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
clip_size = clip_model.visual.input_resolution
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
def do_run():
if seed is not None:
torch.manual_seed(seed)
text_embed = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()
init = None
if init_image is not None:
init = Image.open(fetch(init_image)).convert('RGB')
init = init.resize((model_config['image_size'], model_config['image_size']), Image.LANCZOS)
init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)
make_cutouts = MakeCutouts(clip_size, cutn, cut_pow)
cur_t = None
def cond_fn(x, t, y=None):
with torch.enable_grad():
x = x.detach().requires_grad_()
n = x.shape[0]
my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t
out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})
fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]
x_in = out['pred_xstart'] * fac + x * (1 - fac)
clip_in = normalize(make_cutouts(x_in.add(1).div(2)))
image_embeds = clip_model.encode_image(clip_in).float().view([cutn, n, -1])
dists = spherical_dist_loss(image_embeds, text_embed.unsqueeze(0))
losses = dists.mean(0)
tv_losses = tv_loss(x_in)
loss = losses.sum() * clip_guidance_scale + tv_losses.sum() * tv_scale
return -torch.autograd.grad(loss, x)[0]
if model_config['timestep_respacing'].startswith('ddim'):
sample_fn = diffusion.ddim_sample_loop_progressive
else:
sample_fn = diffusion.p_sample_loop_progressive
for i in range(n_batches):
cur_t = diffusion.num_timesteps - skip_timesteps - 1
samples = sample_fn(
model,
(batch_size, 3, model_config['image_size'], model_config['image_size']),
clip_denoised=False,
model_kwargs={},
cond_fn=cond_fn,
progress=True,
skip_timesteps=skip_timesteps,
init_image=init,
randomize_class=True,
)
for j, sample in enumerate(samples):
cur_t -= 1
for k, image in enumerate(sample['pred_xstart']):
filename = f'diffusion-steps/{batch_size * j:05}.png'
TF.to_pil_image(image.add(1).div(2).clamp(0, 1)).save(filename)
if j % display_frequency == 0 or cur_t == -1:
tqdm.write(f'Batch {i}, step {j}, output {k}:')
print()
display.display(display.Image(filename))
do_run()
###Output
_____no_output_____
###Markdown
**Upscale an image or a folder of images**
###Code
#@title Image Upscaling Setup
# partially adapted from https://colab.research.google.com/github/AhabbscienceStudioPak/ESRGAN/blob/master/ESRGAN_Colab.ipynb#scrollTo=MZuFBZncXRy1
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
ESRGAN_path = abs_root_path + "/ESRGAN"
if not os.path.exists(ESRGAN_path):
os.mkdir(ESRGAN_path)
import gdown
print("Downloading pretrained models")
output1 = ESRGAN_path + '/models/RRDB_ESRGAN_x4.pth'
output2 = ESRGAN_path + '/models/RRDB_PSNR_x4.pth'
output3 = ESRGAN_path + '/models/PPON_D.pth'
output4 = ESRGAN_path + '/models/PPON_G.pth'
print ('Downloading RRDB_ESRGAN_x4.pth')
gdown.download('https://drive.google.com/uc?id=1TPrz5QKd8DHHt1k8SRtm6tMiPjz_Qene', output1, quiet=True)
print ('Downloading RRDB_PSNR_x4.pth')
gdown.download('https://drive.google.com/uc?id=1pJ_T-V1dpb1ewoEra1TGSWl5e6H7M4NN', output2, quiet=True)
print ('Downloading PPON_D.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=1Fr5aKCD6mw6P-hI0BZr6My2gHNhtUk-V', output3, quiet=True)
print ('Downloading PPON_G.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=12uR3BSftNA0HDYiKda23GyAj_crpSjOm', output4, quiet=True)
#@title **Execute Image Upscaling**
import os.path as osp
import glob
import cv2
import numpy as np
import torch
from ESRGAN import RRDBNet_arch as arch
import requests
import imageio
import warnings
warnings.filterwarnings("ignore")
from google.colab import files
Choose_device = "cuda"
model_path = 'models/RRDB_PSNR_x4.pth' #@param ['models/RRDB_ESRGAN_x4.pth','models/RRDB_PSNR_x4.pth','models/PPON_G.pth','models/PPON_D.pth']
device = torch.device(Choose_device)
model_path = ESRGAN_path + '/' + model_path
esr_target_directory = 'your path in quotes' #@param string
test_img_folder = esr_target_directory
model = arch.RRDBNet(3, 3, 64, 23, gc=32)
model.load_state_dict(torch.load(model_path), strict=True)
model.eval()
model = model.to(device)
print('Model path {:s}. \nTesting...'.format(model_path))
idx = 0
for filename in os.listdir(test_img_folder):
filename = test_img_folder + "/" + filename
idx += 1
base = osp.splitext(osp.basename(filename))[0]
print(idx, base)
# read images
img = cv2.imread(filename, cv2.IMREAD_COLOR)
img = img * 1.0 / 255.0
img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float()
img_LR = img.unsqueeze(0)
img_LR = img_LR.to(device)
with torch.no_grad():
output = model(img_LR).data.squeeze().float().cpu().clamp_(0, 1).numpy()
output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0))
output = (output * 255.0).round()
imageio.imwrite('ESRGAN/results/{:s}.png'.format(base), output.astype(np.uint8))
###Output
_____no_output_____
###Markdown
**Generate a video** Consider playing with higher FPS rates if you used the CLIP-guided diffusion method!
###Code
#@title Generate video using ffmpeg
init_frame = 1#@param {type: "number"}
last_frame = 25#@param {type: "number"}
min_fps = 60#@param {type: "number"}
max_fps = 60#@param {type: "number"}
total_frames = last_frame-init_frame
# Desired video runtime in seconds
length = 1#@param {type: "number"}
use_upscaled_images = True #@param {type: "boolean"}
frames = []
tqdm.write('Generating video...')
if use_upscaled_images == True:
for filename in os.listdir(ESRGAN_path + "/results/"):
filename = f"{ESRGAN_path}/results/{filename}"
frames.append(Image.open(filename))
elif use_upscaled_images == False:
for i in range(init_frame,last_frame): #
if usingDiffusion == False:
filename = f"{abs_root_path}/vqgan-steps/{i:04}.png"
elif usingDiffusion == True:
filename = f"{abs_root_path}/diffusion-steps/{i:05}.png"
frames.append(Image.open(filename))
#fps = last_frame/10
fps = np.clip(total_frames/length,min_fps,max_fps)
# Names the video after the prompt if there is one, if not, defaults to video.mp4
def listToString(s):
# initialize an empty string
str1 = ""
# traverse in the string
for ele in s:
str1 += ele
# return string
return str1
video_filename = "video" #@param {type: "string"}
#@markdown Note: using images previously upscaled by ESRGAN may take longer to generate
video_filename = listToString(video_filename).replace(" ","_")
print("Video filename: "+ video_filename)
video_filename = video_filename + ".mp4"
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '17', '-preset', 'veryslow', video_filename], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Compressing video...")
p.wait()
print("Video ready.")
# @title Download video
from google.colab import files
files.download(video_filename)
# @title View video in browser
#@markdown
mp4 = open(video_filename,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display.HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
###Output
_____no_output_____
###Markdown
**Clear all generated files**
###Code
!rm -rf {abs_root_path}"/diffusion-steps"
!rm -rf {abs_root_path}"/vqgan-steps"
!rm -rf {ESRGAN_path}"/results"
###Output
_____no_output_____
###Markdown
Follow & tag me in your art, revision requests etc. on Twitter: [@somewheresy](https://twitter.com/somewheresy) **LEGACY VERSION**: [S2 VQGAN+CLIP Classic.ipynb](https://github.com/justin-bennington/somewhere-ml/blob/main/S2_VQGAN%2BCLIP_Classic.ipynb)The notebook you're currently using is a multimodal GAN art generator patched together from various ML notebooks for generative art (see license). The results of this notebook may be distinct from others in the space.This notebook is great for procedurally generating new images from a text prompt or input image. At Somewhere Systems we use this for everything from generative landscapes to materials design for 3D graphics. Consider checking out our work @ https://s2.lol and hiring us to demystify technology like AR, ML, etc.
###Code
#@title MIT License
#
# Copyright (c) 2021 Katherine Crowson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#@markdown What GPU am I using?
#@markdown V100 > P100 > everything else
!nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv
gpu_name = !nvidia-smi --query-gpu=gpu_name --format=csv
###Output
_____no_output_____
###Markdown
**Filesystem Setup**
###Code
#@markdown Use Temp Filesystem (not recommended)
import os
abs_root_path = "/content"
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
print("Well, I guess you're really bent on using the temporary runtime directory for some reason. Anyway, your root directory is: ")
!pwd
#@title Connect Google Drive (recommended)
import os
abs_root_path = "/content"
from google.colab import drive
drive.mount('/content/drive')
def ensureProperRootPath():
if len(abs_root_path) > 0:
os.chdir(abs_root_path) # Changes directory to absolute root path
print("Root path check: ")
!pwd
ensureProperRootPath()
#@title Make a new folder & set root path to that folder (recommended)
#@markdown Saves a step if you don't have a folder in your Google Drive for this. Makes one, sets the root_path to that new folder. You can name it whatever you'd like:
folder_name = "AI_ART" #@param {type: "string"}
abs_root_path = "/content"
if len(folder_name) > 0:
path_tmp = abs_root_path + "/drive/MyDrive/" + folder_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created folder & set root path to: " + abs_root_path)
#@markdown Make & assign path to a project subfolder (optional)
project_name = "ALL_DATASETS_TEST" #@param {type: "string"}
if len(project_name) > 0:
path_tmp = abs_root_path + "/" + project_name
if not os.path.exists(path_tmp):
os.mkdir(path_tmp)
abs_root_path = path_tmp
print("Created project subfolder & set root path to: " + abs_root_path)
ensureProperRootPath()
###Output
_____no_output_____
###Markdown
**Dependencies**
###Code
# @title Library Installation
import os
!nvidia-smi
print("Downloading CLIP...")
!git clone https://github.com/openai/CLIP &> /dev/null
!git clone https://github.com/crowsonkb/guided-diffusion &> /dev/null
!pip install -e ./CLIP &> /dev/null
print("Installing library for guided diffusion...")
!pip install -e ./guided-diffusion &> /dev/null
print("Installing Python Libraries for AI")
!git clone https://github.com/CompVis/taming-transformers &> /dev/null
!pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null
!pip install kornia &> /dev/null
!pip install einops &> /dev/null
print("Installing transformers library...")
!pip install transformers &> /dev/null
print("Installing libraries for managing metadata...")
!pip install stegano &> /dev/null
!apt install exempi &> /dev/null
!pip install python-xmp-toolkit &> /dev/null
!pip install imgtag &> /dev/null
!pip install pillow==7.1.2 &> /dev/null
!pip install taming-transformers &> /dev/null
print("Installing ESRGAN for image upscaling...")
!git clone https://github.com/xinntao/ESRGAN &> /dev/null
print("Installing ffmpeg for creating videos...")
!pip install imageio-ffmpeg &> /dev/null
if not os.path.exists(abs_root_path + "/vqgan-steps"):
!mkdir "vqgan-steps" &> /dev/null
print("No directory for VQGAN+CLIP image output found. Made directory: ~/vqgan-steps")
if not os.path.exists(abs_root_path + "/diffusion-steps"):
!mkdir "diffusion-steps" &> /dev/null
print("No directory for CLIP-guided diffusion image output found. Made directory: ~/diffusion-steps")
!pip freeze > requirements.txt
print("Installation finished.")
#@title Download pre-trained models
#@markdown Ensure you select a model you've downloaded in the parameters block
#@markdown The below radio button downloads the model for CLIP-guided diffusion, a method that takes a bit longer to produce good results but generally makes more "realistic" interpretations of the prompt. If you'd like to use this, make sure to download Katherine Crowson's diffusion model.
diffusion = True #@param {type: "boolean"}
#@markdown Models for VQGAN+CLIP method
imagenet_1024 = False #@param {type:"boolean"}
imagenet_16384 = False #@param {type:"boolean"}
coco = False #@param {type:"boolean"}
# faceshq = True #@param {type:"boolean"}
# wikiart_1024 = True #@param {type:"boolean"}
wikiart_16384 = False #@param {type:"boolean"}
sflckr = False #@param {type:"boolean"}
if not os.path.exists(abs_root_path + "/models"):
os.mkdir(abs_root_path + "/models")
os.chdir(abs_root_path + "/models")
print("changing to models subdirectory: ")
!pwd
if imagenet_1024:
# !curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.yaml' #ImageNet 1024
# !curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.ckpt' #ImageNet 1024
!curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
!curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
if imagenet_16384:
# !curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.yaml' #ImageNet 16384
# !curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.ckpt' #ImageNet 16384
!curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/f/867b05fc8c4841768640/?dl=1'
!curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/f/274fb24ed38341bfa753/?dl=1'
if coco:
!curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO
!curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO
#if faceshq:
#!curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ
#!curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ
#if wikiart_1024:
#!curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024
#!curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024
if wikiart_16384:
# !curl -L -o wikiart_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.yaml' #WikiArt 16384
# !curl -L -o wikiart_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.ckpt' #WikiArt 16384
!curl -L -o wikiart_16384.ckpt -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.ckpt'
!curl -L -o wikiart_16384.yaml -C - 'http://eaidata.bmk.sh/data/Wikiart_16384/wikiart_f16_16384_8145600.yaml'
if sflckr:
!curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR
!curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR
if diffusion:
# Download the diffusion model
    !curl -L -o 512x512_diffusion_uncond_finetune_008100.pt -C - 'https://the-eye.eu/public/AI/models/512x512_diffusion_unconditional_ImageNet/512x512_diffusion_uncond_finetune_008100.pt' #DIFFUSION 512
# @title Load libraries and definitions
print(abs_root_path)
os.chdir(abs_root_path)
!pwd
import argparse
import math
from pathlib import Path
import io
import sys
sys.path.append('./taming-transformers')
from IPython import display
from base64 import b64encode
from omegaconf import OmegaConf
from PIL import Image
from taming.models import cond_transformer, vqgan
import torch
from torch import nn, optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from tqdm.notebook import tqdm
from CLIP import clip
import kornia.augmentation as K
import numpy as np
import imageio
from PIL import ImageFile, Image
from imgtag import ImgTag # metadata
from libxmp import * # metadata
import libxmp # metadata
from stegano import lsb
import json
import requests  # needed by fetch() below
ImageFile.LOAD_TRUNCATED_IMAGES = True
sys.path.append('./CLIP')
sys.path.append('./guided-diffusion')
import clip
from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults
def sinc(x):
return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
def lanczos(x, a):
cond = torch.logical_and(-a < x, x < a)
out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
return out / out.sum()
def ramp(ratio, width):
n = math.ceil(width / ratio + 1)
out = torch.empty([n])
cur = 0
for i in range(out.shape[0]):
out[i] = cur
cur += ratio
return torch.cat([-out[1:].flip([0]), out])[1:-1]
def resample(input, size, align_corners=True):
n, c, h, w = input.shape
dh, dw = size
input = input.view([n * c, 1, h, w])
if dh < h:
kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
pad_h = (kernel_h.shape[0] - 1) // 2
input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
input = F.conv2d(input, kernel_h[None, None, :, None])
if dw < w:
kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
pad_w = (kernel_w.shape[0] - 1) // 2
input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
input = F.conv2d(input, kernel_w[None, None, None, :])
input = input.view([n, c, h, w])
return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
class ReplaceGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, x_forward, x_backward):
ctx.shape = x_backward.shape
return x_forward
@staticmethod
def backward(ctx, grad_in):
return None, grad_in.sum_to_size(ctx.shape)
replace_grad = ReplaceGrad.apply
class ClampWithGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, input, min, max):
ctx.min = min
ctx.max = max
ctx.save_for_backward(input)
return input.clamp(min, max)
@staticmethod
def backward(ctx, grad_in):
input, = ctx.saved_tensors
return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None
clamp_with_grad = ClampWithGrad.apply
def vector_quantize(x, codebook):
d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T
indices = d.argmin(-1)
x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook
return replace_grad(x_q, x)
class Prompt(nn.Module):
def __init__(self, embed, weight=1., stop=float('-inf')):
super().__init__()
self.register_buffer('embed', embed)
self.register_buffer('weight', torch.as_tensor(weight))
self.register_buffer('stop', torch.as_tensor(stop))
def forward(self, input):
input_normed = F.normalize(input.unsqueeze(1), dim=2)
embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)
dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)
dists = dists * self.weight.sign()
return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()
def parse_prompt(prompt):
vals = prompt.rsplit(':', 2)
vals = vals + ['', '1', '-inf'][len(vals):]
return vals[0], float(vals[1]), float(vals[2])
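# Illustrative behavior of parse_prompt (my examples, not from the original notebook):
#   parse_prompt("a duck sitting in a pond")     -> ("a duck sitting in a pond", 1.0, -inf)
#   parse_prompt("a duck sitting in a pond:0.5") -> ("a duck sitting in a pond", 0.5, -inf)
# Optional ":weight:stop" suffixes default to weight 1 and stop -inf.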
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
self.augs = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
# K.RandomSolarize(0.01, 0.01, p=0.7),
K.RandomSharpness(0.3,p=0.4),
K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'),
K.RandomPerspective(0.2,p=0.4),
K.ColorJitter(hue=0.01, saturation=0.01, p=0.7))
self.noise_fac = 0.1
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
batch = self.augs(torch.cat(cutouts, dim=0))
if self.noise_fac:
facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)
batch = batch + facs * torch.randn_like(batch)
return batch
def load_vqgan_model(config_path, checkpoint_path):
config = OmegaConf.load(config_path)
if config.model.target == 'taming.models.vqgan.VQModel':
model = vqgan.VQModel(**config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':
parent_model = cond_transformer.Net2NetTransformer(**config.model.params)
parent_model.eval().requires_grad_(False)
parent_model.init_from_ckpt(checkpoint_path)
model = parent_model.first_stage_model
else:
raise ValueError(f'unknown model type: {config.model.target}')
del model.loss
return model
def resize_image(image, out_size):
ratio = image.size[0] / image.size[1]
area = min(image.size[0] * image.size[1], out_size[0] * out_size[1])
size = round((area * ratio)**0.5), round((area / ratio)**0.5)
return image.resize(size, Image.LANCZOS)
# Define necessary functions for CLIP guided diffusion
def fetch(url_or_path):
if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'):
r = requests.get(url_or_path)
r.raise_for_status()
fd = io.BytesIO()
fd.write(r.content)
fd.seek(0)
return fd
return open(url_or_path, 'rb')
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
return torch.cat(cutouts)
def spherical_dist_loss(x, y):
x = F.normalize(x, dim=-1)
y = F.normalize(y, dim=-1)
return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
def tv_loss(input):
"""L2 total variation loss, as in Mahendran et al."""
input = F.pad(input, (0, 1, 0, 1), 'replicate')
x_diff = input[..., :-1, 1:] - input[..., :-1, :-1]
y_diff = input[..., 1:, :-1] - input[..., :-1, :-1]
return (x_diff**2 + y_diff**2).mean([1, 2, 3])
###Output
_____no_output_____
###Markdown
**Image Generation**
###Code
#@markdown ### Set Global Parameters
os.chdir(abs_root_path)
seed = -1#@param {type:"number"}
display_frequency = 50#@param {type:"number"}
usingDiffusion = False
#@markdown #**VQGAN+CLIP**
prompts = "A duck sitting in a pond" #@param {type:"string"}
width = 512#@param {type:"number"}
height = 512#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
vqgan_model = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_16384", "coco", "sflckr"]
initial_image = ""#@param {type:"string"}
target_images = ""#@param {type:"string"}
max_iterations = 10000#@param {type:"number"}
input_images = ""
#@markdown ### Advanced VQGAN+CLIP Parameters
vq_init_weight = 0.0#@param {type:"number"}
vq_step_size = 0.1#@param {type:"number"}
vq_cutn = 64#@param {type:"number"}
vq_cutpow = 1.0#@param {type:"number"}
model_names={"vqgan_imagenet_f16_16384": 'ImageNet 16384',"vqgan_imagenet_f16_1024":"ImageNet 1024",
"wikiart_1024":"WikiArt 1024", "wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR"}
model_name = model_names[vqgan_model]
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
if seed == -1:
seed = None
if initial_image == "None":
initial_image = None
if target_images == "None" or not target_images:
target_images = []
else:
target_images = target_images.split("|")
target_images = [image.strip() for image in target_images]
if initial_image or target_images != []:
input_images = True
prompts = [frase.strip() for frase in prompts.split("|")]
if prompts == ['']:
prompts = []
args = argparse.Namespace(
prompts=prompts,
image_prompts=target_images,
noise_prompt_seeds=[],
noise_prompt_weights=[],
size=[width, height],
init_image=initial_image,
init_weight=0.,
clip_model= clip_model,
vqgan_config=f'models/{vqgan_model}.yaml',
vqgan_checkpoint=f'models/{vqgan_model}.ckpt',
step_size=vq_step_size,
cutn=vq_cutn,
cut_pow=vq_cutpow,
display_freq=display_frequency,
seed=seed,
)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
notebook_name = "VQGAN+CLIP"
print('Executing using VQGAN+CLIP method')
print('Using device:', device)
if prompts:
print('Using text prompt:', prompts)
if target_images:
print('Using image prompts:', target_images)
if args.seed is None:
seed = torch.seed()
else:
seed = args.seed
torch.manual_seed(seed)
print('Using seed:', seed)
model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
cut_size = perceptor.visual.input_resolution
e_dim = model.quantize.e_dim
f = 2**(model.decoder.num_resolutions - 1)
make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow)
n_toks = model.quantize.n_e
toksX, toksY = args.size[0] // f, args.size[1] // f
sideX, sideY = toksX * f, toksY * f
z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
if args.init_image:
pil_image = Image.open(args.init_image).convert('RGB')
pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
else:
one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
z = one_hot @ model.quantize.embedding.weight
z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
z_orig = z.clone()
z.requires_grad_(True)
opt = optim.Adam([z], lr=args.step_size)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
pMs = []
for prompt in args.prompts:
txt, weight, stop = parse_prompt(prompt)
embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for prompt in args.image_prompts:
path, weight, stop = parse_prompt(prompt)
img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
embed = perceptor.encode_image(normalize(batch)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights):
gen = torch.Generator().manual_seed(seed)
embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
pMs.append(Prompt(embed, weight).to(device))
def synth(z):
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)
return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)
def add_xmp_data(nombrefichero):
image = ImgTag(filename=nombrefichero)
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True})
if args.prompts:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True})
else:
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', model_name, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
#for frases in args.prompts:
# image.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True})
image.close()
def add_stegano_data(filename):
data = {
"title": " | ".join(args.prompts) if args.prompts else None,
"notebook": notebook_name,
"i": i,
"model": model_name,
"seed": str(seed),
"input_images": input_images
}
lsb.hide(filename, json.dumps(data)).save(filename)
@torch.no_grad()
def checkin(i, losses):
losses_str = ', '.join(f'{loss.item():g}' for loss in losses)
tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')
out = synth(z)
TF.to_pil_image(out[0].cpu()).save('progress.png')
add_stegano_data('progress.png')
add_xmp_data('progress.png')
display.display(display.Image('progress.png'))
def ascend_txt():
global i
out = synth(z)
iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
result = []
if args.init_weight:
result.append(F.mse_loss(z, z_orig) * args.init_weight / 2)
for prompt in pMs:
result.append(prompt(iii))
img = np.array(out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8))[:,:,:]
img = np.transpose(img, (1, 2, 0))
filename = f"vqgan-steps/{i:04}.png"
imageio.imwrite(filename, np.array(img))
add_stegano_data(filename)
add_xmp_data(filename)
return result
def train(i):
opt.zero_grad()
lossAll = ascend_txt()
if i % args.display_freq == 0:
checkin(i, lossAll)
loss = sum(lossAll)
loss.backward()
opt.step()
with torch.no_grad():
z.copy_(z.maximum(z_min).minimum(z_max))
i = 0
try:
with tqdm() as pbar:
while True:
train(i)
if i == max_iterations:
break
i += 1
pbar.update()
except KeyboardInterrupt:
pass
#@markdown # **CLIP-Guided Diffusion**
#@markdown ##### WARNING: This requires access to 16GB of VRAM reliably, so may not work for users not using Colab Pro/+
usingDiffusion = True
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
prompt = "A group of vultures decide the behavior of the stock market\""#@param {type:"string"}
batch_size = 1#@param {type:"number"}
#@markdown Note: x4 and x16 models for CLIP may not work reliably on lower-memory machines
clip_model = "ViT-B/32" #@param ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32','ViT-B/16']
#@markdown Controls how much the image should look like the prompt.
clip_guidance_scale = 1000#@param {type:"number"}
#@markdown Controls the smoothness of the final output.
tv_scale = 150#@param {type:"number"}
cutn = 32#@param {type:"number"}
cut_pow = 0.5#@param {type:"number"}
n_batches = 1#@param {type:"number"}
#@markdown This can be a URL or a Colab local path and must be in quotes.
init_image = None #@param {type:"string"}
#@markdown This needs to be between approx. 200 and 500 when using an init image.
#@markdown Higher values make the output look more like the init.
skip_timesteps = 0#@param {type:"number"}
diffusion_steps = 1500#@param {type:"number"}
if seed == -1:
seed = None
diff_image_size = 256 # size of image when using diffusion
diff_image_size = int(diff_image_size)
model_config = model_and_diffusion_defaults()
model_config.update({
'attention_resolutions': '32, 16, 8',
'class_cond': False,
'diffusion_steps': diffusion_steps,
'rescale_timesteps': True,
'timestep_respacing': str(diffusion_steps), # Modify this value to decrease the number of
# timesteps.
'image_size': 512,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 256,
'num_head_channels': 64,
'num_res_blocks': 2,
'resblock_updown': True,
'use_fp16': True,
'use_scale_shift_norm': True,
})
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Executing using CLIP guided diffusion method')
if prompt is not None:
print('Using prompt: '+ prompt)
print('Using device:', device)
model, diffusion = create_model_and_diffusion(**model_config)
model.load_state_dict(torch.load(abs_root_path + "/models/" + '512x512_diffusion_uncond_finetune_008100.pt', map_location='cpu'))
model.requires_grad_(False).eval().to(device)
for name, param in model.named_parameters():
if 'qkv' in name or 'norm' in name or 'proj' in name:
param.requires_grad_()
if model_config['use_fp16']:
model.convert_to_fp16()
clip_model = clip.load(clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
clip_size = clip_model.visual.input_resolution
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
def do_run():
if seed is not None:
torch.manual_seed(seed)
text_embed = clip_model.encode_text(clip.tokenize(prompt).to(device)).float()
init = None
if init_image is not None:
init = Image.open(fetch(init_image)).convert('RGB')
init = init.resize((model_config['image_size'], model_config['image_size']), Image.LANCZOS)
init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1)
make_cutouts = MakeCutouts(clip_size, cutn, cut_pow)
cur_t = None
def cond_fn(x, t, y=None):
with torch.enable_grad():
x = x.detach().requires_grad_()
n = x.shape[0]
my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t
out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y})
fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t]
x_in = out['pred_xstart'] * fac + x * (1 - fac)
clip_in = normalize(make_cutouts(x_in.add(1).div(2)))
image_embeds = clip_model.encode_image(clip_in).float().view([cutn, n, -1])
dists = spherical_dist_loss(image_embeds, text_embed.unsqueeze(0))
losses = dists.mean(0)
tv_losses = tv_loss(x_in)
loss = losses.sum() * clip_guidance_scale + tv_losses.sum() * tv_scale
return -torch.autograd.grad(loss, x)[0]
if model_config['timestep_respacing'].startswith('ddim'):
sample_fn = diffusion.ddim_sample_loop_progressive
else:
sample_fn = diffusion.p_sample_loop_progressive
for i in range(n_batches):
cur_t = diffusion.num_timesteps - skip_timesteps - 1
samples = sample_fn(
model,
(batch_size, 3, model_config['image_size'], model_config['image_size']),
clip_denoised=False,
model_kwargs={},
cond_fn=cond_fn,
progress=True,
skip_timesteps=skip_timesteps,
init_image=init,
randomize_class=True,
)
for j, sample in enumerate(samples):
cur_t -= 1
for k, image in enumerate(sample['pred_xstart']):
filename = f'diffusion-steps/{batch_size * j:05}.png'
TF.to_pil_image(image.add(1).div(2).clamp(0, 1)).save(filename)
if j % display_frequency == 0 or cur_t == -1:
tqdm.write(f'Batch {i}, step {j}, output {k}:')
print()
display.display(display.Image(filename))
do_run()
###Output
_____no_output_____
###Markdown
**Upscale an image or a folder of images**
###Code
#@title Image Upscaling Setup
# partially adapted from https://colab.research.google.com/github/AhabbscienceStudioPak/ESRGAN/blob/master/ESRGAN_Colab.ipynb#scrollTo=MZuFBZncXRy1
torch.cuda.empty_cache()
with torch.no_grad():
torch.cuda.empty_cache()
ESRGAN_path = abs_root_path + "/ESRGAN"
if not os.path.exists(ESRGAN_path):
os.mkdir(ESRGAN_path)
import gdown
print("Downloading pretrained models")
output1 = ESRGAN_path + '/models/RRDB_ESRGAN_x4.pth'
output2 = ESRGAN_path + '/models/RRDB_PSNR_x4.pth'
output3 = ESRGAN_path + '/models/PPON_D.pth'
output4 = ESRGAN_path + '/models/PPON_G.pth'
print ('Downloading RRDB_ESRGAN_x4.pth')
gdown.download('https://drive.google.com/uc?id=1TPrz5QKd8DHHt1k8SRtm6tMiPjz_Qene', output1, quiet=True)
print ('Downloading RRDB_PSNR_x4.pth')
gdown.download('https://drive.google.com/uc?id=1pJ_T-V1dpb1ewoEra1TGSWl5e6H7M4NN', output2, quiet=True)
print ('Downloading PPON_D.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=1Fr5aKCD6mw6P-hI0BZr6My2gHNhtUk-V', output3, quiet=True)
print ('Downloading PPON_G.pth by Zheng Hui')
gdown.download('https://drive.google.com/uc?id=12uR3BSftNA0HDYiKda23GyAj_crpSjOm', output4, quiet=True)
#@title **Execute Image Upscaling**
import os.path as osp
import glob
import cv2
import numpy as np
import torch
from ESRGAN import RRDBNet_arch as arch
import requests
import imageio
import warnings
warnings.filterwarnings("ignore")
from google.colab import files
Choose_device = "cuda"
model_path = 'models/RRDB_PSNR_x4.pth' #@param ['models/RRDB_ESRGAN_x4.pth','models/RRDB_PSNR_x4.pth','models/PPON_G.pth','models/PPON_D.pth']
device = torch.device(Choose_device)
model_path = ESRGAN_path + '/' + model_path
esr_target_directory = 'your path in quotes' #@param string
test_img_folder = esr_target_directory
model = arch.RRDBNet(3, 3, 64, 23, gc=32)
model.load_state_dict(torch.load(model_path), strict=True)
model.eval()
model = model.to(device)
print('Model path {:s}. \nTesting...'.format(model_path))
idx = 0
for filename in os.listdir(test_img_folder):
filename = test_img_folder + "/" + filename
idx += 1
base = osp.splitext(osp.basename(filename))[0]
print(idx, base)
# read images
img = cv2.imread(filename, cv2.IMREAD_COLOR)
img = img * 1.0 / 255.0
img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float()
img_LR = img.unsqueeze(0)
img_LR = img_LR.to(device)
with torch.no_grad():
output = model(img_LR).data.squeeze().float().cpu().clamp_(0, 1).numpy()
output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0))
output = (output * 255.0).round()
imageio.imwrite('ESRGAN/results/{:s}.png'.format(base), output.astype(np.uint8))
###Output
_____no_output_____
###Markdown
**Generate a video** Consider playing with higher FPS rates if you used the CLIP-guided diffusion method!
###Code
#@title Generate video using ffmpeg
init_frame = 1#@param {type: "number"}
last_frame = 25#@param {type: "number"}
min_fps = 60#@param {type: "number"}
max_fps = 60#@param {type: "number"}
total_frames = last_frame-init_frame
# Desired video runtime in seconds
length = 1#@param {type: "number"}
use_upscaled_images = False #@param {type: "boolean"}
frames = []
tqdm.write('Generating video...')
if use_upscaled_images == True:
for filename in os.listdir(ESRGAN_path + "/results/"):
filename = f"{ESRGAN_path}/results/{filename}"
frames.append(Image.open(filename))
elif use_upscaled_images == False:
for i in range(init_frame,last_frame): #
if usingDiffusion == False:
filename = f"{abs_root_path}/vqgan-steps/{i:04}.png"
frames.append(Image.open(filename))
elif usingDiffusion == True:
filename = f"{abs_root_path}/diffusion-steps/{i:05}.png"
frames.append(Image.open(filename))
#fps = last_frame/10
fps = np.clip(total_frames/length,min_fps,max_fps)
# Names the video after the prompt if there is one, if not, defaults to video.mp4
def listToString(s):
# initialize an empty string
str1 = ""
# traverse in the string
for ele in s:
str1 += ele
# return string
return str1
video_filename = "video" #@param {type: "string"}
#@markdown Note: using images previously upscaled by ESRGAN may take longer to generate
video_filename = listToString(video_filename).replace(" ","_")
print("Video filename: "+ video_filename)
video_filename = video_filename + ".mp4"
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '17', '-preset', 'veryslow', video_filename], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Compressing video...")
p.wait()
print("Video ready.")
# @title Download video
from google.colab import files
files.download(video_filename)
# @title View video in browser
#@markdown
mp4 = open(video_filename,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display.HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
###Output
_____no_output_____
###Markdown
**Clear all generated files**
###Code
!rm -rf {abs_root_path}"/diffusion-steps"
!rm -rf {abs_root_path}"/vqgan-steps"
!rm -rf {ESRGAN_path}"/results"
###Output
_____no_output_____ |
src/analysis.ipynb | ###Markdown
Ames Data Analysis
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import f_oneway, pearsonr
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import OLSInfluence
from pipeline_v1 import ordinal, nominal, continuous, discrete
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def get_formulas(features):
formulas = []
for feature in features:
formula = "Sale_Price ~ C(" + feature + ")"
formulas.append(formula)
return formulas
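# Example (illustrative): get_formulas(["Neighborhood"]) returns
# ["Sale_Price ~ C(Neighborhood)"], i.e. one formula per feature, with C()
# telling patsy/statsmodels to treat the regressor as categorical.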
# Get Data
filename = "../data/external/Ames_data.csv"
ames = pd.read_csv(filename)
X = ames.drop(columns=["PID","Sale_Price"])
y = ames["Sale_Price"].to_numpy()
ames_df = ames.drop(columns="PID")
###Output
(2930, 82)
###Markdown
Sale Condition
###Code
print(X[X["Sale_Condition"] != "Normal"].shape)
print(X[X["Sale_Condition"] != "Normal"].shape[0]/X.shape[0])
###Output
(517, 82)
0.1764505119453925
###Markdown
Neighborhoods
###Code
print(X["Neighborhood"].unique())
###Output
['North_Ames' 'Gilbert' 'Stone_Brook' 'Northwest_Ames' 'Somerset'
'Briardale' 'Northpark_Villa' 'Northridge_Heights' 'Bloomington_Heights'
'Northridge' 'Sawyer_West' 'Sawyer' 'Greens' 'Brookside' 'Old_Town'
'Iowa_DOT_and_Rail_Road' 'Clear_Creek'
'South_and_West_of_Iowa_State_University' 'Edwards' 'College_Creek'
'Crawford' 'Blueste' 'Mitchell' 'Timberland' 'Meadow_Village' 'Veenker'
'Green_Hills' 'Landmark']
###Markdown
Outlier Detection
###Code
def get_cooks(X, PID, features, formulas):
measures = pd.DataFrame()
    for feature, formula in zip(features, formulas):
mu = np.mean(X[feature])
model = smf.ols(formula,data=X).fit()
infl = model.get_influence()
c, p = infl.cooks_distance
norm_cooks = c / mu
d = {"PID": PID, "Feature": feature, "mu": mu, "Cooks": c,"Normalized": norm_cooks, "p-value": p}
df = pd.DataFrame(data=d)
measures = pd.concat((measures, df), axis=0)
return measures
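# For reference (standard definition, not specific to this notebook): Cook's
# distance for observation i is D_i = (e_i**2 / (p * s**2)) * (h_ii / (1 - h_ii)**2),
# with residual e_i, leverage h_ii, p fitted parameters, and residual mean square
# s**2; statsmodels exposes it through get_influence().cooks_distance, used above.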
features = continuous + discrete
features.remove("Age")
features.remove("Garage_Age")
formulas = get_formulas(features)
PID = ames["PID"].to_numpy()  # observation IDs to tag each Cook's distance row
cooks = get_cooks(ames_df, PID, features, formulas)
filepath = "../reports/cooks.csv"
cooks.to_csv(filepath, index=False)
def analyze_outliers(X, cooks, cutoff=3):
outlier_stats = cooks[cooks["Normalized"]>=cutoff].sort_values(by=["PID", "Feature"])
outliers = outlier_stats["PID"].unique()
n_outliers = len(outliers)
pct_outliers = n_outliers / X.shape[0] * 100
print(f"\nThere are {n_outliers} outliers ({pct_outliers}%) at cutoff={cutoff}.")
print(outlier_stats)
for i in range(3,7):
analyze_outliers(X, cooks, i)
###Output
_____no_output_____
###Markdown
Continuous Variables Correlation
###Code
corr = []
p_values = []
features = continuous + discrete
for feature in continuous:
r, p = pearsonr(X[feature],y)
corr.append(r)
p_values.append(p)
d = {"Feature": continuous, "Correlation": corr, "Importance": np.abs(corr), "p-values":p_values}
df = pd.DataFrame(data=d)
df = df.sort_values(by="Importance", ascending=False)
print(df)
###Output
Feature Correlation Importance p-values
10 Gr_Liv_Area 0.706780 0.706780 0.000000e+00
11 Garage_Area 0.640138 0.640138 0.000000e+00
6 Total_Bsmt_SF 0.632529 0.632529 0.000000e+00
7 First_Flr_SF 0.621676 0.621676 5.687256e-313
2 Mas_Vnr_Area 0.502196 0.502196 4.881360e-187
12 Wood_Deck_SF 0.327143 0.327143 4.820100e-74
13 Open_Porch_SF 0.312951 0.312951 1.373555e-67
8 Second_Flr_SF 0.269373 0.269373 6.941583e-50
1 Lot_Area 0.266549 0.266549 7.633843e-49
0 Lot_Frontage 0.201875 0.201875 2.547312e-28
5 Bsmt_Unf_SF 0.183308 0.183308 1.478791e-23
3 BsmtFin_SF_1 -0.134905 0.134905 2.254104e-13
14 Enclosed_Porch -0.128787 0.128787 2.607945e-12
16 Screen_Porch 0.112151 0.112151 1.148246e-09
17 Pool_Area 0.068403 0.068403 2.110638e-04
9 Low_Qual_Fin_SF -0.037660 0.037660 4.151429e-02
15 Three_season_porch 0.032225 0.032225 8.115701e-02
18 Misc_Val -0.015691 0.015691 3.958476e-01
4 BsmtFin_SF_2 0.006018 0.006018 7.447333e-01
###Markdown
###Code
def get_formulas(features):
formulas = []
for feature in features:
formula = "Sale_Price ~ C(" + feature + ")"
formulas.append(formula)
return formulas
def get_f(features, formulas):
p = []
s = []
for formula in formulas:
model = smf.ols(formula,data=ames_df).fit()
aov_table = sm.stats.anova_lm(model, typ=2)
p.append(aov_table['PR(>F)'][0])
sig = "Significance" if aov_table['PR(>F)'][0]<0.05 else "Not Signficant"
s.append(sig)
d = {"Feature": features, "PR(>F)": p, "Significant": s}
df = pd.DataFrame(d)
return df
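# PR(>F) is the p-value of the one-way ANOVA F-test: the probability of seeing
# this much between-group variation in Sale_Price if the categorical feature
# had no effect. Features with PR(>F) < 0.05 are flagged as significant below.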
formulas = get_formulas(ordinal)
results_ord = get_f(ordinal, formulas)
formulas = get_formulas(nominal)
results_nom = get_f(nominal, formulas)
print(results_ord)
print("\n")
print(results_nom)
###Output
Feature PR(>F) Significant
0 BsmtFin_Type_1 5.579684e-160 Significance
1 BsmtFin_Type_2 1.143399e-17 Significance
2 Bsmt_Cond 2.061283e-31 Significance
3 Bsmt_Exposure 1.188864e-124 Significance
4 Bsmt_Qual 0.000000e+00 Significance
5 Electrical 2.713510e-36 Significance
6 Exter_Cond 1.611921e-16 Significance
7 Exter_Qual 0.000000e+00 Significance
8 Fence 1.136202e-23 Significance
9 Fireplace_Qu 2.543386e-229 Significance
10 Functional 6.270286e-09 Significance
11 Garage_Cond 2.249954e-52 Significance
12 Garage_Finish 5.458334e-233 Significance
13 Garage_Qual 3.973437e-57 Significance
14 Heating_QC 4.917676e-152 Significance
15 Kitchen_Qual 0.000000e+00 Significance
16 Land_Slope 1.011949e-03 Significance
17 Lot_Shape 1.067863e-60 Significance
18 Overall_Cond 9.118002e-98 Significance
19 Overall_Qual 0.000000e+00 Significance
20 Paved_Drive 2.034633e-51 Significance
21 Pool_QC 1.238923e-11 Significance
22 Utilities 2.129079e-01 Not Significant
Feature PR(>F) Significant
0 MS_SubClass 1.622412e-167 Significance
1 MS_Zoning 1.100499e-74 Significance
2 Street 1.267537e-03 Significance
3 Alley 6.239449e-15 Significance
4 Land_Contour 3.743678e-28 Significance
5 Lot_Config 1.041223e-12 Significance
6 Neighborhood 0.000000e+00 Significance
7 Condition_1 2.491407e-25 Significance
8 Condition_2 5.430319e-13 Significance
9 Bldg_Type 2.475923e-21 Significance
10 House_Style 3.028710e-47 Significance
11 Roof_Style 2.371044e-49 Significance
12 Roof_Matl 4.847509e-08 Significance
13 Exterior_1st 3.385609e-106 Significance
14 Exterior_2nd 4.633598e-105 Significance
15 Mas_Vnr_Type 4.056978e-133 Significance
16 Foundation 1.267124e-205 Significance
17 Heating 2.802295e-05 Significance
18 Central_Air 4.247468e-48 Significance
19 Garage_Type 1.665985e-177 Significance
20 Misc_Feature 1.540321e-02 Significance
21 Sale_Type 3.418425e-89 Significance
22 Sale_Condition 2.312086e-91 Significance
###Markdown
Implementation of a FIR Filter on the FPGA: Generation of the Signal
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.signal import lfilter, firwin #fundamental to simulate a fir_filter in python
import fileinput
from scipy.fft import fft, fftfreq
import sys
# To create the signal
def sine_wave(A, time, f): # creates a sine wave
return A * np.sin(2 * np.pi * f * time) # F(t) = A sin(wt) = A sin(2pi f t)
data_size = 125
# X-Axis
t = np.linspace(0, 1, data_size)
print('t =',t[:3],'...',t[data_size-3:])
#Now we generate the wave
A1, A2 = 60, 60
f1, f2 = 10, 60
wave = sine_wave(A1, t, f1) + sine_wave(A2, t, f2)
# Let's plot it
plt.figure(figsize=(15, 5))
plt.plot(t, wave, '-o',alpha=.5,label='Raw signal')
file = open("../signal.txt", "w") # the file where to write the signal to be filtered
numtaps = 4
f = 30
# this function gives us the coefficients used in the testbench
# numtaps = 4 depends on the structure of the filter (in the weighted sum we add 4 terms)
# f = the cutoff frequency (for a Low-pass from 0 to f)
# fs = The sampling frequency of the signal.
# Each frequency in cutoff must be between 0 and fs/2. (Nyquist theorem)
# fs is the number of samples obtain in one second
c = signal.firwin(numtaps, f, fs=data_size)
print("Coefficients for the Fir Filter:", c)
coeffpot = 8
rc = c * 2**coeffpot
print(rc)
trunc_rc = np.round(rc,0).astype(int)
bin_rc = []
hex_rc = []
for i in range(numtaps):
bin_rc.append(bin(trunc_rc[i]))
hex_rc.append(hex(trunc_rc[i]))
print(bin_rc)
print(hex_rc)
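# A minimal sketch (my addition, not from the original notebook) of the 4-tap
# FIR difference equation y[n] = sum_k c[k] * x[n-k] that the FPGA implements
# as an integer weighted sum, using the scaled taps trunc_rc and a final
# right-shift by coeffpot bits to undo the 2**coeffpot coefficient scaling:
def fir4_fixed_point(x, taps, shift):
    y = []
    for n in range(len(x)):
        acc = 0  # integer multiply-accumulate, as a DSP slice would compute it
        for k, ck in enumerate(taps):
            if n - k >= 0:
                acc += int(ck) * int(x[n - k])
        y.append(acc >> shift)  # rescale back to the input's range
    return y
# e.g. fir4_fixed_point(wave.astype(int), trunc_rc, coeffpot) approximates
# the filtered signal that the hardware produces.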
c = signal.firwin(4, 30, fs=125)
rc = c * 2**8
hex_rc = []
for i in range(numtaps):
hex_rc.append(hex(trunc_rc[i]))
print(rc)
print(hex_rc)
file = open("../signal.txt", "w") # the file where to write the signal to be filtered
np.savetxt(file, wave, fmt='%d', delimiter='\n')
file.close()
# Now we visualize it:
sig = np.loadtxt("../signal.txt")  # one value per line; the default whitespace delimiter handles this
plt.figure(figsize=(10, 5))
plt.xlim(-0.02,0.6)
plt.plot(t, sig, '-o',alpha=.6,label='raw signal')
plt.legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Results
###Code
# After being computed on the actual FPGA
##############
code = 'VElAx'
##############
url = 'https://transfer.sh/'+ code +'/output.txt'
! curl $url --output ../fromfpga.txt
fname = "../fromfpga.txt"
fpga = np.loadtxt(fname, delimiter='\n')
for i in range(len(fpga)):
# 2**10 due to truncation of bits
# 2**8 because the coefficients were scaled by 2**8 (coeffpot = 8)
fpga[i] = (2**10/2**8)*fpga[i]
# this function simulates the output of a fir_filter
python_sig = lfilter(c, 1, wave)
plt.figure(figsize=(10, 5))
plt.plot(t, wave, '-o',alpha=.5,label='Raw signal')
plt.plot(t-5/data_size, fpga, '-o', color='g',alpha=1,label='FPGA signal')
#plt.plot(t-5/data_size, fpga, '-o', color='r',alpha=1,label='Python simulated')
#plt.title("")
plt.xlim(-0.02,0.6)
plt.legend(loc="upper left")
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(14, 6))
plt.xlim(0,.2)
ax1.plot(t, wave, '-o',alpha=.5,label='Raw signal')
ax1.plot(t-5/data_size, fpga, '-o', color='g',alpha=1,label='FPGA signal')
ax1.set_xlim(0,0.3)
ax1.legend(loc="upper left")
ax2.plot(t, wave, '-o',alpha=.5,label='Raw signal')
ax2.plot(t-5/data_size, fpga, '-o', color='r',alpha=1,label='Python simulated')
ax2.set_xlim(0,0.3)
ax2.legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Fourier Analysis
###Code
plt.figure(figsize=(10, 5))
fft_wave = np.fft.fft(wave)
fft_fpga = np.fft.fft(fpga)
T = 1 / data_size # sampling interval
N = data_size
xf = np.arange(data_size//2+1)
plt.ylabel("Amplitude")
plt.xlabel("Frequency [Hz]")
plt.plot(xf, 2.0/N * np.abs(fft_wave[0:N//2+1]),label='Signal frequencies')
plt.plot(xf, 2.0/N * np.abs(fft_fpga[0:N//2+1]), color = 'g', label='Filtered frequencies')
plt.plot([30, 30], [60, -1], 'k--', lw=2, label= 'Cutoff frequency',alpha=.6)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Finding and fixing a mismatch between the Go memory model and data-race detector. A story on applied formal methods. Daniel S. Fava, [email protected], https://github.com/dfava/paper.go.mm.drd
###Code
import re
import operator
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from matplotlib import rc
rc('font',**{'family': 'serif', 'serif': ['Computer Modern'],'size' : 18})
rc('text', usetex=True)
class Experiment:
def __init__(self, fname, info=None):
assert(type(fname)==str)
assert(info == None or type(info)==dict)
self.fname = fname
self.info = info
self.data = None
def len(self):
return len(self.data['ops'])
def parse(self, verbose=False):
fhandle = open(self.fname, 'r')
lines = fhandle.readlines()
fhandle.close()
s = re.compile('\S*, ops=(.*), procs=(.*)/(.*), locks=(.*), VC procs=(.*)/(.*), VC locks=(.*)/(.*)')
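# Hypothetical example of a line this regex matches (illustration only):
# "tick, ops=1000000, procs=3/40, locks=12, VC procs=40/1600, VC locks=12/480"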
self.data = {
'name' : self.fname,
'ops' : [],
'procs' : { 'active' : [], 'total' : [], 'vc total' : [] },
'locks' : { 'total' : [], 'vc total' : [] },
}
for lnum, line in enumerate(lines):
m = s.match(line)
if m:
self.data['ops'].append(int(m.group(1)))
self.data['procs']['active'].append(int(m.group(2)))
self.data['procs']['total'].append(int(m.group(3)))
self.data['locks']['total'].append(int(m.group(4)))
self.data['procs']['vc total'].append(int(m.group(6)))
self.data['locks']['vc total'].append(int(m.group(8)))
continue
if verbose:
max_ = 4
print(self.data['ops'][0:max_])
print(self.data['procs']['vc total'][0:max_])
print(self.data['locks']['vc total'][0:max_])
expsFT = [
Experiment('../data/sortnp.ft.out', info={'go' : {'sz' : 10000, 'N' : 40}, 'rd' : None}),
Experiment('../data/sortnp.fix.ft.out', info={'go' : {'sz' : 10000, 'N' : 40}, 'rd' : None}),
]
for exp in expsFT:
exp.parse(verbose=True)
print()
start=0
end=min(len(expsFT[0].data['ops']), len(expsFT[1].data['ops']))
fig, ax = plt.subplots()
ax.plot(expsFT[0].data['ops'][start:end], expsFT[0].data['locks']['vc total'][start:end], color='black')
ax.plot(expsFT[1].data['ops'][start:end], expsFT[1].data['locks']['vc total'][start:end], linestyle='--', color='black')
ax.set_ylabel('VC entries')
ax.set_xlabel('Instructions executed')
ax.grid()
fig.canvas.draw()
labels = [item.get_text() for item in ax.get_xticklabels()]
print(labels)
xlabels = ['', '0', '5M', '10M', '15M', '20M']
_ = ax.set_xticklabels(xlabels)
ylabels = ['', '0', '2K', '4K', '6K', '8K', '10K']
_ = ax.set_yticklabels(ylabels)
plt.savefig("vcentries.pdf", bbox_inches='tight')
###Output
['$-0.5$', '$0.0$', '$0.5$', '$1.0$', '$1.5$', '$2.0$', '$2.5$']
###Markdown
Task at hand: - Create embedding vectors for users and items (movies)- These vectors are optimized over the difference between observed ratings and the dot product of the user and item vectors (a minimal numpy sketch of this objective appears in the code below)
###Code
K = [1,2,3,4,5]
M = [2,3,1,5]
print (LabelEncoder().fit_transform(K))
print (LabelEncoder().fit_transform(M))
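# Minimal numpy sketch of the objective stated above (illustration only, not
# the Keras model trained below): the predicted rating is the dot product of
# a user vector and an item vector, and the squared error against an observed
# rating is what gets minimized. The vectors and the rating 4.0 are hypothetical.
import numpy as np
u_vec = np.random.rand(50)
m_vec = np.random.rand(50)
pred = np.dot(u_vec, m_vec)
print('predicted rating:', pred, 'squared error vs. 4.0:', (4.0 - pred) ** 2)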
print (ratings_df['MovieID'].nunique())
# note: test membership against the values, not the Series index
[i for i in range(60434) if i not in ratings_df['UserID'].values]
n_users = ratings_df['UserID'].nunique()
n_movies = ratings_df['MovieID'].nunique()
print (n_users*n_movies)
print (len(ratings_df['Rating']))
# Analysis (plag)
g = ratings_df.groupby('UserID')['Rating'].count()
top_users = g.sort_values(ascending=False)[:15]
g = ratings_df.groupby('MovieID')['Rating'].count()
top_movies = g.sort_values(ascending=False)[:15]
top_r = ratings_df.join(top_users, rsuffix='_u', how='inner', on='UserID')
top_r = top_r.join(top_movies, rsuffix='_m', how='inner', on='MovieID')
# Analysis_2 (plag)
user_enc = LabelEncoder()
ratings_df['User'] = user_enc.fit_transform(ratings_df['UserID'].values)
n_users = ratings_df['User'].nunique()
item_enc = LabelEncoder()
ratings_df['Movie'] = item_enc.fit_transform(ratings_df['MovieID'].values)
n_movies = ratings_df['Movie'].nunique()
print (type(ratings_df['Rating'][0]))
ratings_df['Rating'] = ratings_df['Rating'].values.astype(np.float32)
min_rating = min(ratings_df['Rating'])
max_rating = max(ratings_df['Rating'])
n_users, n_movies, min_rating, max_rating
# Analysis_3 (plag)
from sklearn.model_selection import train_test_split
X = ratings_df[['User', 'Movie']].values
y = ratings_df['Rating'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# converting into list of lists
X_train_array = [X_train[:, 0], X_train[:, 1]]
X_test_array = [X_test[:, 0], X_test[:, 1]]
# Deep Learning (plag)
# computation graph creation
from keras.models import Model
from keras.layers import Input, Reshape, Dot
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.regularizers import l2
def RecommenderV1(n_users, n_movies, n_factors):
user = Input(shape=(1,), name='user_input')
u = Embedding(n_users, n_factors, embeddings_initializer='he_normal',
embeddings_regularizer=l2(1e-6))(user)
u = Reshape((n_factors,))(u)
movie = Input(shape=(1,))
m = Embedding(n_movies, n_factors, embeddings_initializer='he_normal',
embeddings_regularizer=l2(1e-6))(movie)
m = Reshape((n_factors,))(m)
x = Dot(axes=1)([u, m])
model = Model(inputs=[user, movie], outputs=x)
opt = Adam(lr=0.001)
model.compile(loss='mean_squared_error', optimizer=opt)
return model
# Initialization (plag)
n_factors = 50
model = RecommenderV1(n_users, n_movies, n_factors)
model.summary()
# Model fitting (plag)
history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=5,
verbose=1, validation_data=(X_test_array, y_test))
for layer in model.layers:
print(layer.output_shape)
type(history)
# Improvement (plag)
from keras.layers import Add, Activation, Lambda, Dense
class EmbeddingLayer:
def __init__(self, n_items, n_factors):
self.n_items = n_items
self.n_factors = n_factors
def __call__(self, x):
x = Embedding(self.n_items, self.n_factors, embeddings_initializer='he_normal',
embeddings_regularizer=l2(1e-6))(x)
x = Reshape((self.n_factors,))(x)
return x
def RecommenderV2(n_users, n_movies, n_factors, min_rating, max_rating):
user = Input(shape=(1,))
u = EmbeddingLayer(n_users, n_factors)(user)
ub = EmbeddingLayer(n_users, 1)(user)
movie = Input(shape=(1,))
m = EmbeddingLayer(n_movies, n_factors)(movie)
mb = EmbeddingLayer(n_movies, 1)(movie)
x = Dot(axes=1)([u, m])
x = Add()([x, ub, mb])
# x = Dense(n_users*n_movies, activation='sigmoid')(x)
x = Activation('sigmoid')(x)
x = Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(x)
print ('Output shape: ', x.shape)
model = Model(inputs=[user, movie], outputs=x)
opt = Adam(lr=0.001)
model.compile(loss='mean_squared_error', optimizer=opt)
return model
# Initialization (plag)
n_factors = 50
model = RecommenderV2(n_users, n_movies, n_factors, min_rating, max_rating)
model.summary()
# Model fitting (plag)
history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=5,
verbose=1, validation_data=(X_test_array, y_test))
###Output
/Users/eshasingh/env/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
###Markdown
AnalysisIn this notebook, I analyse the results of our hyperparameter search and the errors the different models make. The notebook is structured as:1. Results2. Error analysis
###Code
%load_ext blackcellmagic
import json
import os
import pandas as pd
from filenames import ROOT
os.chdir(ROOT)
###Output
_____no_output_____
###Markdown
Results- 1440 models trained- Best single model: 0.957- Best averaged model: 0.953- Strong positive correlation between performance and token embedding size- Medium positive correlation between performance and hidden size- Weak positive correlation between performance and character embedding size- Models take on average 3.5 minutes to train- Almost all models achieve above 0.99 top 3 accuracy
###Code
# Load relevant results into a pandas DataFrame
DIR = "models/pos"
subdirnames = os.listdir(DIR)
results = []
for subdirname in subdirnames:
try:
filename = os.path.join(DIR, subdirname, "metrics.json")
with open(filename) as file:
metrics = json.load(file)
fold, token, char, hidden, batch, pretrained, _ = subdirname.split("-")
for var in ["fold", "token", "char", "hidden", "batch", "pretrained"]:
metrics[var] = eval(var)
results.append(metrics)
except FileNotFoundError: # this model may still be training
continue
results = pd.DataFrame(results)
columns_to_keep = [
"fold",
"token",
"char",
"hidden",
"batch",
"pretrained",
"best_epoch",
"training_duration",
"validation_accuracy",
"validation_accuracy3",
]
results = results[columns_to_keep]
columns_to_int = ["fold", "token", "char", "hidden", "batch"]
for column in columns_to_int:
results[column] = results[column].astype(int)
results["pretrained"].replace({"true": True, "false": False}, inplace=True)
results["training_duration"] = pd.to_timedelta(results["training_duration"])
results.head()
# How many models have we trained?
len(results)
# What is the best score?
results["validation_accuracy"].max()
# Which model was that?
results.iloc[results["validation_accuracy"].idxmax()]
# Which hyperparameters are correlated with performance?
columns = ["fold", "token", "char", "hidden", "batch", "validation_accuracy"]
results[columns].corr()["validation_accuracy"].sort_values(ascending=False)
# How are the models performing on top 3 accuracy?
results["validation_accuracy3"].describe()
# How do the models compare when averaging over the cross-validation folds?
hyperparams = ["token", "char", "hidden", "batch", "pretrained"]
results.groupby(hyperparams)["validation_accuracy"].mean().to_frame().sort_values(
by="validation_accuracy"
).tail(30)
# How long are the models taking to train?
results["training_duration"].describe()
# How many epochs are the models taking?
results["best_epoch"].describe()
###Output
_____no_output_____
###Markdown
Error analysis
###Code
from tqdm import tqdm
from pos import load_model, predict_from_text
def read_fold_validation(fold):
filename = f"data/evalatin/processed/pos/{fold}-valid-unprocessed.txt"
with open(filename) as file:
contents = file.readlines()
return contents
DIR = "models/pos"
subdirnames = os.listdir(DIR)
results = []
i = 0
for subdirname in tqdm(subdirnames):
try:
serialization_dir = os.path.join(DIR, subdirname)
model = load_model(serialization_dir)
fold, token, char, hidden, batch, pretrained, _ = subdirname.split("-")
validation_data = read_fold_validation(fold)
for sentence in validation_data:
words = [pair.split("/")[0] for pair in sentence.split()]
tags = [pair.split("/")[1] for pair in sentence.split()]
predictions = predict_from_text(model, " ".join(words))
predictions["true_tag"] = tags
errors = predictions[predictions["tag"] != predictions["true_tag"]].copy()
for var in ["fold", "token", "char", "hidden", "batch", "pretrained"]:
errors[var] = eval(var)
results.append(errors)
except FileNotFoundError: # this model may still be training
continue
i += 1
if i > 3:
break
results = pd.concat(results, ignore_index=True)
results.head()
# What are the most common errors?
results.groupby(["tag", "true_tag"]).size().to_frame("count").sort_values(by="count", ascending=False).head(30)
# What form are the models most often getting wrong?
results["form"].value_counts().head(20)
###Output
_____no_output_____
###Markdown
Plurals
###Code
print("Counts of each type of value within the plurals dataset")
plural_type_counts = plurals.subcategory.value_counts()
plural_type_counts
print("Percentage of each type which were found to be exact matches")
(plurals.groupby(by="subcategory")['score_0_exact'].agg("sum") / plural_type_counts) * 100
print("Evaluating means of bleu scores")
plurals.groupby(by="subcategory")['score_0_bleu'].agg("mean").round(4)
print("Median bleu score of each subcategory")
plurals.groupby(by="subcategory")['score_0_bleu'].agg("median").round(6)
print("Percent of exact values from entire plural set")
(plurals['score_0_exact'].agg("sum") / len(plurals.index)) * 100
print("Average bleu score for plurals")
plurals['score_0_bleu'].agg("mean").round(4)
print("Median bleu score for plurals")
plurals['score_0_bleu'].agg("median").round(6)
print("Percent of exact values from plurals where subcategory is not to-single")
(plurals[plurals['subcategory'] != 'plural|from-single']['score_0_exact'].agg("sum") / len(plurals[plurals['subcategory'] != 'plural|from-single'].index)) * 100
print("Average bleu score for plurals where subcategory is not single")
plurals[plurals['subcategory'] != 'plural|from-single']['score_0_bleu'].agg("mean").round(4)
print("Median bleu score for plurals where subcategory is not single")
plurals[plurals['subcategory'] != 'plural|from-single']['score_0_bleu'].agg("median").round(6)
print("Examples of exact matches where not \"to-single\"")
plurals.query('score_0_exact == 1 & subcategory != \'plural|from-single\'').sample(n=20)
print("Examples of exact matches where is \"to-single\"")
plurals.query('score_0_exact == 1 & subcategory == \'plural|from-single\'').sample(n=20)
print("Examples from top 10% matches where not \"to-single\"")
plurals.query('score_0_exact == 0 & subcategory != \'plural|from-single\'').sort_values(by="score_0_bleu", ascending=False).head(int(len(plurals)*0.10)).sample(n=20)
print("Examples from top 10% matches where is \"to-single\"")
plurals.query('score_0_exact == 0 & subcategory == \'plural|from-single\'').sort_values(by="score_0_bleu", ascending=False).head(int(len(plurals)*0.10)).sample(n=20)
print("Examples from bottom 25% matches where not \"to-single\"")
plurals.query('score_0_exact == 0 & subcategory != \'plural|from-single\'').sort_values(by="score_0_bleu", ascending=False).tail(int(len(plurals)*0.25)).sample(n=20)
print("Examples from bottom 25% matches where is \"to-single\"")
plurals.query('score_0_exact == 0 & subcategory == \'plural|from-single\'').sort_values(by="score_0_bleu", ascending=False).tail(int(len(plurals)*0.25)).sample(n=20)
###Output
Examples from bottom 25% matches where is "to-single"
###Markdown
Opposites
###Code
opposite_type_counts = opposites.subcategory.value_counts()
opposite_type_counts
(opposites.groupby(by="subcategory")['score_0_exact'].agg("sum") / opposite_type_counts) * 100
opposites.groupby(by="subcategory")['score_0_bleu'].agg("mean").round(4)
opposites.groupby(by="subcategory")['score_0_bleu'].agg("median").round(4)
print("Percent of exact values from entire opposite set")
(opposites['score_0_exact'].agg("sum") / len(opposites.index)) * 100
print("Average bleu score for opposites")
opposites['score_0_bleu'].agg("mean").round(4)
print("Median bleu score for opposites")
opposites['score_0_bleu'].agg("median").round(6)
print("Examples of exact matches")
opposites[opposites['score_0_exact'] == 1]
print("Top 25% of bleu scores in opposites")
opposites[opposites['score_0_exact'] == 0].sort_values(by="score_0_bleu", ascending=False).head(int(len(opposites)*0.25)).sample(n=20)
print("Top 25% of bleu scores in opposites where OPTIMUS didn't generate the value in c")
opposites.query("score_0_exact == 0 & pred_0 != c.str.lower()").sort_values(by="score_0_bleu", ascending=False).head(int(len(plurals)*0.10)).sample(n=20)
print("Bottom 25% of bleu scores in opposites")
opposites[opposites['score_0_exact'] == 0].sort_values(by="score_0_bleu", ascending=False).tail(int(len(opposites)*0.25)).sample(n=20)
###Output
Bottom 25% of bleu scores in opposites
###Markdown
Comparatives
###Code
comparative_type_counts = comparatives.subcategory.value_counts()
comparative_type_counts
(comparatives.groupby(by="subcategory")['score_0_exact'].agg("sum") / comparative_type_counts) * 100
comparatives.groupby(by="subcategory")['score_0_bleu'].agg("mean").round(4)
comparatives.groupby(by="subcategory")['score_0_bleu'].agg("median").round(4)
print("Examples of exact matches")
comparatives[comparatives['score_0_exact'] == 1].sample(n=20)
print("Top 25% of bleu scores in comparative")
comparatives[comparatives['score_0_exact'] == 0].sort_values(by="score_0_bleu", ascending=False).head(int(len(plurals)*0.10)).sample(n=20)
print("Bottom 25% of bleu scores in opposites")
comparatives[comparatives['score_0_exact'] == 0].sort_values(by="score_0_bleu", ascending=False).tail(int(len(plurals)*0.25)).sample(n=20)
###Output
Bottom 25% of bleu scores in comparatives
###Markdown
Checking out duplicate valuesAssuming that 'Ocorrencia' is a unique code for the transaction itself, let's check whether any occurrence is duplicated.```pythonlen(df.index.unique())```If the dataset contained no duplicates, this piece of code would return 150,000 data entries. It returned only 64,958 values - meaning that this dataset contains 85,042 duplicated data entries.```pythonlen(df) - len(df.index.unique())```The duplicated values will be kept for the analysis and for training in the modeling step. Given the nature of this dataset, these duplicates could have been generated naturally - one occurrence may simply happen more than once - or, given the scarcity of available training material, some transactions could have been generated artificially.--------------------------------
###Code
# Checking the number of unique values.
len(df.index.unique())
# Checking the number of duplicated entries.
len(df) - len(df.index.unique())
###Output
_____no_output_____
###Markdown
Exploratory AnalysisThis section checks the data distribution and behaviour:- N.A. values?- Outliers?- Min.- Max.- Mean.- Stdev.-------------------------
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Describe Analysis ResultThis section summarizes the initial analysis of this dataset.The command below summarizes each variable and retrieves its main statistical characteristics. ```pythondf.describe()```The first thing to notice is the 'Sacado' variable - the amount of money withdrawn. | Statistical Measurement | Value || :---------------------: | :----------: || Mean | -88.602261 || Standard Deviation | 247.302373 || Min | -19656.53 || Max | -0.00 |As can be observed in this table, the behaviour of the 'Sacado' variable is odd. First of all, it has the highest standard deviation of all variables (247.30).```pythondf.describe().loc['std'].sort_values(ascending=False).head()```The mean, min and max values are strange as well - all of them are negative or zero. How could these values be negative or zero if this variable is meant to represent the total withdrawn value of the transaction?__Possible errors:__- Acquisition errors?- Parsing issues?The other variables seem to behave well (distributed around the mean, close to a normal curve) - even without knowing what they represent (are the max values high? are the min values low?)._obs: Even with the lower deviation, a simple normalization will be applied to this dataset during training._-------------
###Code
df.describe().loc['std'].sort_values(ascending=False).head()
df[df.Sacado >= 0]
###Output
_____no_output_____
###Markdown
Some plotsThis section contains plots visualizing the dispersion of a few selected variables.----------------
###Code
df[['PP1', 'PP2', 'PP6', 'PP21']].hist()
# As it can be observed. The Sacado variable has a lot of outliers - removing and analysing it alone
# (for not disturbing the scale)
df[['PP1', 'PP2', 'PP21', 'PP6', 'Sacado']].boxplot()
# There are outliers on it - predicted it on histogram.
df[['PP1', 'PP2', 'PP6', 'PP21']].boxplot()
df[['Sacado']].boxplot()
###Output
_____no_output_____
###Markdown
Checking for N.A. valuesThis dataset contains no N.A./blank values.----------------------------
###Code
sum(df.index.isna())
dict_na = {
'columns': list(df.columns),
'na': []
}
for i in range(len(df.columns)):
dict_na.get('na').append(sum(df[df.columns[i]].isna()))
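# Equivalent one-liner (sketch): df.isna().sum() yields the same per-column counts.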
pandas.DataFrame(dict_na).set_index('columns')
###Output
_____no_output_____
###Markdown
Is this dataset imbalanced?This section checks whether the dataset is imbalanced - are there more frauds than non-frauds, or vice versa?The table below assumes that the y variable - Fraude - has only 2 unique values, as presented in the table.```pythondf.Fraude.unique()```| Value | Meaning | Total | Percentage || :---: | :-------: | :------: | :--------: || 0 | Non Fraud | 149,763 | 99.842 % || 1 | Fraud | 237 | 0.158 % |As the table above shows - assuming 0 represents a non-fraudulent transaction and 1 a fraudulent one - this dataset is heavily imbalanced, with less than 1 % fraudulent transactions (237 data entries). In the model-training step this would be a problem: the model would likely overfit on the fraudulent occurrences. To prevent this, new fraudulent data entries - artificially generated or naturally acquired - should be added (a minimal oversampling sketch is included in the code below).----------------------------------------
###Code
# Checking how many unique entries this variable presents.
df.Fraude.unique()
# Checking how many data entries are non-fraud or 0
print(len(df[df['Fraude'] == 0]))
# Checking the percentage of non-fraud transactions
print(len(df[df['Fraude'] == 0])/len(df.Fraude))
# Checking how many data entries are fraud or 1
len(df[df['Fraude'] == 1])
# Checking the percentage of fraud transactions
print(len(df[df['Fraude'] == 1])/len(df.Fraude))
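# A minimal sketch of one way to rebalance (assumption: simple random
# oversampling of the minority class; SMOTE or gathering more real fraud
# cases would be alternatives, and the notebook itself does not apply this):
frauds = df[df['Fraude'] == 1]
df_oversampled = pandas.concat([df, frauds.sample(n=10000, replace=True, random_state=0)])
print(df_oversampled['Fraude'].mean())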
###Output
0.00158
###Markdown
Dimensionality ReductionThis section reduces the dimensionality of this dataset.__Techniques that can be used:__- linear regression, correlation and statistical relevance;- PCA;_obs: despite the robustness of PCA, some articles report issues with its performance - losing to simpler techniques (a PCA sketch is included in the code below for comparison)._-----------------------
###Code
occurrence = pandas.Series(df.index)
x = pandas.DataFrame(df[df.columns[1:-1]])
y = pandas.DataFrame(df[df.columns[-1]])
# Multiple Linear Regression
lm = linear_model.LinearRegression().fit(x, y)
attr_reduction = SelectFromModel(lm, prefit=True)
df_pca = pandas.DataFrame(attr_reduction.transform(x))
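# For comparison, the PCA alternative mentioned above would look like this
# sketch (n_components=10 is an arbitrary choice; the regression-based
# selection above is what the rest of the notebook uses):
from sklearn.decomposition import PCA
df_pca_alt = pandas.DataFrame(PCA(n_components=10).fit_transform(x))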
###Output
_____no_output_____
###Markdown
Building PredictorsThree models will be implemented and compared - if none of them meets the requirements, new models can be chosen. Accuracy is not the only criterion: the most problematic issue is False Negatives - when an occurrence is fraudulent but the model classifies it as non-fraudulent; whenever this happens the model "loses" points. False positives can be sent to human validation, so they are less problematic than False Negatives.__Models__:- Linear Regression;- Support Vector Machines;- Random Forest._obs: Compared with the other classifiers, the random forest showed one advantage and one disadvantage - it took much longer to converge than Linear Regression and the SVM, but it produced the most precise classifier of the three, with fewer False Negatives.__obs: Given these results, a grid search over SVM and Random Forest will not be needed._Under this scenario, even with time complexity being an issue when pipelined into production, the random forest is chosen for the "production" step._obs: My concerns proved real. All 3 models classify non-fraudulent transactions well. However - due to the lack of data - all 3, at some point and to some degree, overfit when classifying fraudulent transactions; a further study will be made with Random Forest, the model with the most precise behaviour._------------------------
###Code
def data_separation(df, proportion=0.2):
"""
Data separation method.
"""
return train_test_split(df, test_size=proportion)
def time_screening(dt):
"""
Fitting time performance calculator.
"""
print(datetime.datetime.now() - dt)
results = {
'linear_model': {
'train': [],
'test': [],
'validation': []
},
'svm': {
'train': [],
'test': [],
'validation': []
},
'random_forest': {
'train': [],
'test': [],
'validation': []
}
}
train, test = data_separation(df)
test, validation = data_separation(test, 0.4)
# Splitting into train - x and y
x_train = pandas.DataFrame(train[train.columns[0:-1]])
y_train = pandas.DataFrame(train[train.columns[-1]])
# Splitting into test - x and y
x_test = pandas.DataFrame(test[test.columns[0:-1]])
y_test = pandas.DataFrame(test[test.columns[-1]])
# Splitting into validation - x and y
x_validation = pandas.DataFrame(validation[validation.columns[0:-1]])
y_validation = pandas.DataFrame(validation[validation.columns[-1]])
# Multiple Linear Regression
begin = datetime.datetime.now()
lm = linear_model.LinearRegression().fit(x_train, y_train)
time_screening(begin)
y_train['Predicted'] = lm.predict(x_train)
y_train['Predicted'] = y_train['Predicted'].astype(int)
y_test['Predicted'] = lm.predict(x_test)
y_test['Predicted'] = y_test['Predicted'].astype(int)
y_validation['Validation'] = lm.predict(x_validation)
y_validation['Validation'] = y_validation['Validation'].astype(int)
results.get('linear_model')['train'] = len(y_train[y_train['Fraude'] == y_train['Predicted']])/len(y_train)
results.get('linear_model')['test'] = len(y_test[y_test['Fraude'] == y_test['Predicted']])/len(y_test)
results.get('linear_model')['validation'] = len(y_validation[y_validation['Fraude'] == y_validation['Validation']])/len(y_validation)
pandas.DataFrame(confusion_matrix(y_train[['Fraude']], y_train[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_test[['Fraude']], y_test[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_validation[['Fraude']], y_validation[['Validation']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
# Linear Support Vector Machine
begin = datetime.datetime.now()
lsvc = LinearSVC(C=0.01, penalty="l1", dual=False, max_iter=10000).fit(x_train, y_train.Fraude.values)
time_screening(begin)
y_train['Predicted'] = lsvc.predict(x_train)
y_train['Predicted'] = y_train['Predicted'].astype(int)
y_test['Predicted'] = lsvc.predict(x_test)
y_test['Predicted'] = y_test['Predicted'].astype(int)
y_validation['Validation'] = lsvc.predict(x_validation)
y_validation['Validation'] = y_validation['Validation'].astype(int)
results.get('svm')['train'] = len(y_train[y_train['Fraude'] == y_train['Predicted']])/len(y_train)
results.get('svm')['test'] = len(y_test[y_test['Fraude'] == y_test['Predicted']])/len(y_test)
results.get('svm')['validation'] = len(y_validation[y_validation['Fraude'] == y_validation['Validation']])/len(y_validation)
pandas.DataFrame(confusion_matrix(y_train[['Fraude']], y_train[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_test[['Fraude']], y_test[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_validation[['Fraude']], y_validation[['Validation']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
# Random Forest
begin = datetime.datetime.now()
r_forest = RandomForestClassifier(n_estimators=90).fit(x_train, y_train.Fraude.values)
time_screening(begin)
y_train['Predicted'] = r_forest.predict(x_train)
y_train['Predicted'] = y_train['Predicted'].astype(int)
y_test['Predicted'] = r_forest.predict(x_test)
y_test['Predicted'] = y_test['Predicted'].astype(int)
y_validation['Validation'] = r_forest.predict(x_validation)
y_validation['Validation'] = y_validation['Validation'].astype(int)
results.get('random_forest')['train'] = len(y_train[y_train['Fraude'] == y_train['Predicted']])/len(y_train)
results.get('random_forest')['test'] = len(y_test[y_test['Fraude'] == y_test['Predicted']])/len(y_test)
results.get('random_forest')['validation'] = len(y_validation[y_validation['Fraude'] == y_validation['Validation']])/len(y_validation)
pandas.DataFrame(confusion_matrix(y_train[['Fraude']], y_train[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_test[['Fraude']], y_test[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_validation[['Fraude']], y_validation[['Validation']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(results)
###Output
_____no_output_____
###Markdown
Using the selected model in a "production" environment- Normalize the data- Split the data- Fit the model and predict-----------------------------------------------------
###Code
# Data Normalization
scaler = preprocessing.MinMaxScaler().fit(df_pca)
df_pca_norm = pandas.DataFrame(scaler.transform(df_pca))
df_pca_norm['Occurrence'] = occurrence
df_pca_norm.set_index('Occurrence', drop=True, inplace=True)
# Data separation
df_pca_norm['Fraude'] = y
train, test = data_separation(df_pca_norm)
test, validation = data_separation(test, 0.4)
# Splitting into train - x and y
x_train = pandas.DataFrame(train[train.columns[0:-1]])
y_train = pandas.DataFrame(train[train.columns[-1]])
# Splitting into test - x and y
x_test = pandas.DataFrame(test[test.columns[0:-1]])
y_test = pandas.DataFrame(test[test.columns[-1]])
# Splitting into validation - x and y
x_validation = pandas.DataFrame(validation[validation.columns[0:-1]])
y_validation = pandas.DataFrame(validation[validation.columns[-1]])
# Random Forest
begin = datetime.datetime.now()
r_forest = RandomForestClassifier(n_estimators=90).fit(x_train, y_train.Fraude.values)
time_screening(begin)
y_train['Predicted'] = r_forest.predict(x_train)
y_train['Predicted'] = y_train['Predicted'].astype(int)
y_test['Predicted'] = r_forest.predict(x_test)
y_test['Predicted'] = y_test['Predicted'].astype(int)
y_validation['Validation'] = r_forest.predict(x_validation)
y_validation['Validation'] = y_validation['Validation'].astype(int)
print(len(y_train[y_train['Fraude'] == y_train['Predicted']])/len(y_train))
print(len(y_test[y_test['Fraude'] == y_test['Predicted']])/len(y_test))
print(len(y_validation[y_validation['Fraude'] == y_validation['Validation']])/len(y_validation))
pandas.DataFrame(confusion_matrix(y_train[['Fraude']], y_train[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_test[['Fraude']], y_test[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
pandas.DataFrame(confusion_matrix(y_validation[['Fraude']], y_validation[['Validation']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
# Checking if there's overfitting on classifying Frauds - due the low quantity of data entries
overfitting = x_validation.copy() # copy to avoid mutating x_validation in place
overfitting['Fraude'] = y_validation['Fraude']
aux = x_test.copy() # likewise for x_test
aux['Fraude'] = y_test['Fraude']
overfitting = overfitting.append(aux)
overfitting = overfitting[overfitting['Fraude'] == 1]
del(aux)
overfitting['Predicted'] = r_forest.predict(overfitting.drop(columns=['Fraude']))
# Decay of assertiveness rate
print(len(overfitting[overfitting['Fraude'] == overfitting['Predicted']])/len(overfitting))
pandas.DataFrame(confusion_matrix(overfitting[['Fraude']], overfitting[['Predicted']]),
['Non Fraud', 'Fraud'], ['Non Fraud', 'Fraud'])
###Output
_____no_output_____
###Markdown
Description: this program uses an artificial recurrent neural network called Long Short-Term Memory (LSTM) to predict the closing price of an index (S&P 500) using the past 60 days of index prices.
###Code
# Import the libraries
import math
import pandas_datareader as web
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
import matplotlib.pyplot as plt
import mplcyberpunk
plt.style.use('fivethirtyeight')
# Get the stock quote
df = web.DataReader('^GSPC', data_source='yahoo', start='2012-01-01', end='2020-12-17')
# Show the data
df
# Get the number of rows and columns in the data set
df.shape
# Visualize the closing price history
# plt.style.use("cyberpunk")
plt.figure(figsize=(16, 8))
plt.title('S&P500 Close Price History')
plt.plot(df['Close'])
plt.xlabel('Data', fontsize=18)
plt.ylabel('Close Price USD ($)', fontsize=18)
# mplcyberpunk.add_glow_effects()
plt.show()
#plt.savefig('price_hist.png')
# Create a new dataframe with only the Close column
data = df.filter(['Close'])
# Convert the dataframe to a numpy array
dataset = data.values
# Get the number of rows to train the model on
training_data_len = math.ceil( len(dataset) * .8 )
training_data_len
# Scale the data
scaler = MinMaxScaler(feature_range=(0,1))
scaled_data = scaler.fit_transform(dataset)
scaled_data
# Create the training data set
# Create the scaled training data set
train_data = scaled_data[0:training_data_len, :]
# Split the data into x_train and y_train data sets
x_train = []
y_train = []
for i in range(60, len(train_data)):
x_train.append(train_data[i-60:i, 0])
y_train.append(train_data[i, 0])
if i<= 61:
print(x_train)
print(y_train)
print()
# Convert the x_train and y_train to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)
# Reshape the data
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_train.shape
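# Keras LSTM layers expect input shaped (samples, timesteps, features);
# each sample here is a 60-day window with a single feature, the close price.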
# Build the LSTM model
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(x_train.shape[1], 1)))
model.add(LSTM(50, return_sequences=False))
model.add(Dense(25))
model.add(Dense(1))
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
model.fit(x_train, y_train, batch_size=1, epochs=1)
# Create the testing data set
# Create a new array containing scaled values from index 1745 to 2256
test_data = scaled_data[training_data_len - 60:, :]
# Create the data sets x_test and y_test
x_test = []
y_test = dataset[training_data_len:, :]
for i in range(60, len(test_data)):
x_test.append(test_data[i-60:i,0])
# Convert the data into a numpy array
x_test = np.array(x_test)
# Reshape the data
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
# Get the models predicted price values
predictions = model.predict(x_test)
predictions = scaler.inverse_transform(predictions)
# Get the root mean squared error (RMSE); the square must apply to each
# residual before averaging, not to the mean of the residuals
rmse = np.sqrt(np.mean((predictions - y_test)**2))
rmse
# Plot the data
train = data[:training_data_len]
valid = data[training_data_len:].copy() # copy so the column assignment below is safe
valid['Predictions'] = predictions
# Visualize
plt.figure(figsize=(16,8))
plt.title('LSTM Prediction Model on S&P 500')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price ($)', fontsize=18)
plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
plt.legend(['Train', 'Val', 'Predictions'], loc='lower right')
plt.show()
# Show the valid and predicted prices
valid
# Get the quote
sp500_quote = web.DataReader('^GSPC', data_source='yahoo', start='2012-01-01', end='2020-12-17')
# Create a new dataframe
new_df = sp500_quote.filter(['Close'])
# Get the last 60 day closing price values and convert the dataframe to an array
last_60_days = new_df[-60:].values
# Scale the data to the values between 0 and 1
last_60_days_scaled = scaler.transform(last_60_days)
# Create an empty list
X_test = []
# Append the past 60 days
X_test.append(last_60_days_scaled)
# Convert the X_test data set to a numpy array
X_test = np.array(X_test)
# Reshape the data
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
# Get the predicted scaled price
pred_price = model.predict(X_test)
# Undo the scaling
pred_price = scaler.inverse_transform(pred_price)
print(pred_price)
sp500_quote2 = web.DataReader('^GSPC', data_source='yahoo', start='2020-12-18', end='2020-12-18')
print(sp500_quote2['Close'])
###Output
Date
2020-12-18 3709.409912
2020-12-18 3709.409912
Name: Close, dtype: float64
###Markdown
Setting up
###Code
%pylab
%matplotlib inline
from scipy.sparse.linalg import eigs
from scipy.integrate import trapz, dblquad
from scipy.special import erf, erfc, xlogy
from scipy.constants import Boltzmann as kB, g as g_earth
from scipy.optimize import minimize_scalar
params = {
"backend": "MacOSX",
"font.family": "sans-serif",
"text.usetex": True,
"mathtext.fontset": "cm",
"text.latex.preamble": "\n".join([
r"\usepackage{amsmath}", r"\usepackage{lmodern}",
r"\usepackage{siunitx}", r"\usepackage{units}",
r"\usepackage{physics}", r"\usepackage{bm}",
r"\usepackage{nicefrac}", r"\usepackage{amssymb}"
]),
"figure.figsize": [6, 6],
"lines.linewidth": 3.0,
"lines.markersize": 5.0,
"axes.spines.top": False,
"axes.spines.right": False,
"axes.labelsize": 28,
"axes.formatter.limits": [-4, 4],
"xtick.labelsize": 20,
"ytick.labelsize": 20,
"xtick.minor.visible": True,
"ytick.minor.visible": True,
"hist.bins": "auto",
"errorbar.capsize": 5.0,
}
matplotlib.rcParams.update(params)
set_printoptions(linewidth=400, formatter={"float_kind": lambda x: "%.5f" % x})
def N_distr(x, mu, sigma2):
"""Return the normal distribution with mean mu and variance sigma2."""
return exp(-0.5*((x-mu)**2)/sigma2)/sqrt(2.0*pi*sigma2)
###Output
_____no_output_____
###Markdown
Main text ($\sigma_{m} = 0$) Figure 3A We outline below the computations to get the theoretical curve in Fig. 3A of our manuscript.
###Code
def compute_clean_T(xnpr, v, dg, dt):
"""Return the propagator (Eq. S22) for xnpr = x_{n^{+}}^{r}."""
return (
heaviside(-v[None, :], 0.0)*N_distr(
xnpr[:, None], (v[None, :]+dg)*exp(-dt) - dg, 1.0-exp(-2.0*dt)
)
+ heaviside(v[None, :], 0.0)*N_distr(
xnpr[:, None], -dg - (v[None, :]-dg)*exp(-dt), 1.0-exp(-2.0*dt)
)
)*abs(xnpr[1]-xnpr[0])
def compute_clean_Ttilde(xnr, u, dg, dt):
"""Return the propagator (Eq. S23) for xnr = x_{n}^{r}."""
return heaviside(-xnr[:, None], 0.0)*(
N_distr(xnr[:, None], dg - (u[None, :]+dg)*exp(-dt), 1.0-exp(-2.0*dt))
+ N_distr(xnr[:, None], (u[None, :]+dg)*exp(-dt) - dg, 1.0-exp(-2.0*dt))
)*abs(xnr[1]-xnr[0])
def find_clean_steady_states(out_grid, in_grid, dg, dt, TOL=1e-3):
"""Find and return the steady-state distributions for xnpr and xnr."""
# compute transition matrices
T_clean = compute_clean_T(out_grid, in_grid, dg, dt)
Ttilde_clean = compute_clean_Ttilde(out_grid, in_grid, dg, dt)
# find the 3 largest eigenvalues and associated eigenvectors
p0 = N_distr(out_grid, 0.0, 1.0) # starting guess is standard Gaussian
w_xnpr, v_xnpr = eigs(T_clean, k=3, v0=p0)
w_xnr, v_xnr = eigs(Ttilde_clean, k=3, v0=p0)
# find the eigenvector with eigenvalue 1
p_xnpr = v_xnpr[:, where((w_xnpr - 1.0).__abs__() < TOL)[0][0]]
p_xnr = v_xnr[:, where((w_xnr - 1.0).__abs__() < TOL)[0][0]]
# re-normalize the eigenvectors to make them into distributions
p_xnpr /= trapz(p_xnpr, out_grid)
p_xnr /= trapz(p_xnr, out_grid)
return p_xnpr.real, p_xnr.real
def compute_means(dg=0.8, nscan=40):
"""Run the calculation that gives you the steady-state
average power as a function of sampling time."""
# set up the grid over to discretize equations over
grid_to = grid_from = linspace(-20.0, 20.0, 2000)
times = logspace(-3.0, 2.0, nscan)
mean_powers_out = zeros(int(nscan))
for idx, time in enumerate(times):
p_xnpr, p_xnr = find_clean_steady_states(grid_to, grid_from, dg, time)
# compute the mean work
mean_powers_out[idx] = dg*(trapz(grid_to*p_xnpr, grid_to) - trapz(grid_to*p_xnr, grid_to))/time
return column_stack((times, mean_powers_out))
def get_clean_limits(times, dg=0.8):
W_eq = sqrt(2.0/pi)*dg*exp(-0.5*(dg**2)) + (dg**2)*(erf(sqrt(0.5)*dg)-1.0)
P_eq = W_eq/times
P_infty = sqrt(2.0/pi)*dg*exp(-0.5*(dg**2))/(1+erf(sqrt(0.5)*dg))
return P_eq, P_infty
###Output
_____no_output_____
###Markdown
Getting out the results for the parameters used in Fig. 3A
###Code
clean_results = compute_means()
quasistatic_limit, infty_limit = get_clean_limits(clean_results[:,0])
###Output
_____no_output_____
###Markdown
Given the results, we now re-plot the theory curve in Fig. 3A
###Code
fig, ax = subplots(1, 1)
# plotting the results of numerical computation
ax.plot(1.0/clean_results[::-1,0], clean_results[::-1,1], "k", lw=3.0)
# plotting the theoretical asymptotic behaviors
ax.axhline(infty_limit, color="lightgray", ls=":", zorder=3)
ax.plot(1.0/clean_results[::-1, 0], quasistatic_limit[::-1], color="lightgray", ls="--", zorder=3)
# making the plot look nice
ax.set_yscale("log")
ax.set_ylim((5e-3, 0.5))
ax.set_xscale("log")
ax.set_xlim((3e-2, 1e3))
ax.set_ylabel(r"$P\ \left[k_{\mathrm{B}}T/\tau_{\mathrm{R}}\right]$", fontsize=22, labelpad=8)
ax.set_xlabel(r"$f_{\mathrm{s}}$", fontsize=22, labelpad=8)
ax.tick_params(labelsize=20)
fig.tight_layout()
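# Remark (see text below): the steady-state velocity follows from the power
# as v = P/delta_g (delta_g = 0.8 here), so on log-log axes the v vs. f_s
# curve is the P curve shifted vertically by -log(delta_g).
velocity = clean_results[:, 1]/0.8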
###Output
_____no_output_____
###Markdown
We can also look at the velocity $v$ as a function of the sampling frequency $f_{\mathrm{s}}$; however, since $P$ and $v$ are related simply by the multiplicative factor $\delta_{\mathrm{g}}$, the curve looks qualitatively the same, merely shifted vertically from the $P$ vs. $f_{\mathrm{s}}$ curve (on a log-log scale) by an amount $-\log\delta_{\mathrm{g}}$. Figure 3B Now we outline the calculations for Figure 3B:
###Code
def compute_pow_v_thresh(dg=0.8, nscan=40):
"""Compute the power vs. threshold curve by evaluating the mean first-passage time
through the integral formula (Eq. 9)"""
V = lambda x: (0.5*x + dg)*x # define the potential to be integrated
theory_curve = zeros(nscan)
threshold_values = linspace(1e-3, 3.0, theory_curve.size)
for idx, Xt in enumerate(threshold_values):
theory_curve[idx], _ = dblquad(
lambda y, x: exp(V(x)-V(y)),
-Xt, Xt,
lambda x: -800.0, # setting to something really large and negative to replicate -\infty
lambda x: x
)
mean_powers_out = dg*(2.0*(threshold_values)/theory_curve)
return column_stack((threshold_values, mean_powers_out))
clean_results_thresh = compute_pow_v_thresh()
fig, ax = subplots(1, 1)
ax.plot(clean_results_thresh[:,0], clean_results_thresh[:,1], lw=3.0)
ax.axhline(infty_limit, color="lightgray", ls=":", zorder=3)
ax.axhline(0.0, color="lightgray", ls="--", zorder=3, lw=1.0)
ax.tick_params(labelsize=20)
ax.set_xticks([0.0, 1.0, 2.0, 3.0])
ax.set_xticklabels([r"$0$", r"$1$", r"$2$", r"$3$"])
ax.set_xlim((0.0, 2.5))
ax.set_ylabel(r"$P\ \left[k_{\mathrm{B}}T/\tau_{\mathrm{R}}\right]$", fontsize=22, labelpad=8)
ax.set_xlabel(r"$X_{\mathrm{T}}$", fontsize=22)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Supplementary Material Now we run through the computations that are outlined primarily in the Supplemental Material Section L Here we run through the computations that generate Table S1.
###Code
def get_tbl_S1():
# ============= DEFINE PROPERTIES ==============
# define bead properties
diameters = array([0.5, 1.5, 3.0, 5.0])*1e-6
r = diameters*0.5 #radius
mass = array([6.54e-17, 1.73e-15, 1.41e-14, 6.54e-14])
gamma = array([4.20e-9, 1.25e-8, 2.52e-8, 4.19e-8])
# define medium properties
eta = 8.9e-4 # dynamic viscosity of water
rho_f = 1e3 # density of water
kT = kB*(293.0) # assume temperature of 20 degrees Celsius
beta = kT**(-1)
# ============= COMPUTE QUANTITIES ==============
# define S^{2} = tau_{f}/tau_{r} and Q^{2} = tau_{v}/tau_{r}
Sval = 1.0
Qval = 1.0
# compute the fluid and velocity relaxation time scales
tau_f = (r**2)*rho_f/eta
tau_v = mass/gamma
# compute critical kappa values for Q = Qval = 1 and S = Sval = 1
kf = ((Sval**2)*gamma*eta)/((r**2)*rho_f)
kv = ((Qval**2)*(gamma**2))/mass
# compute delta values associated with kf and kv
delta_s = mass*g_earth*sqrt(beta/kf)
delta_q = mass*g_earth*sqrt(beta/kv)
# compute velocity and power associated with critical kappa
v_s = sqrt(2.0/pi)*exp(-0.5*(delta_s**2))*(1.0/sqrt(beta*kf)) / ((gamma/kf)*(1.0+erf(delta_s/sqrt(2.0))))
pows_s = sqrt(2.0/pi)*delta_s*exp(-0.5*(delta_s**2)) / ((gamma/kf)*(1.0+erf(delta_s/sqrt(2.0))))
# return in corresponding units to Table
return vstack((diameters/1e-6, tau_f/1e-6, tau_v/1e-6, kf/1e-6, v_s/1e-6, pows_s))
get_tbl_S1()
###Output
_____no_output_____
###Markdown
Section N ($\sigma_{m}\neq 0$) In this section we work out the efficiency of our engine by considering the case where we have noisy measurements as modeled by a Gaussian noise kernel with mean zero and variance $\sigma_{\mathrm{m}}^{2}$ (ie. $p(y|x) = \mathcal{N}(y|x, \sigma_{\mathrm{m}}^{2})$) Given this noise model the relative coordinates have new propagators. For detailed derivations of these propagators see associated Mathematica "propagators.nb" notebook.
###Code
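# Sketch of the measurement model stated above (values illustrative): a
# reported position y is the true position x plus zero-mean Gaussian noise of
# standard deviation sigma_m, i.e. p(y|x) = N(y | x, sigma_m^2).
sigma_m = 0.1
y_meas = 0.5 + sigma_m*randn(10000)
print(mean(y_meas), std(y_meas))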
def compute_noisy_T(xkpr, u, dg=0.8, sg=0.1, t=1e-3, alpha=2):
"""Return the propagator (Eq. S22) for xkpr = x_{n^{+}}^{r} but for noisy measurements."""
return (
(exp(t - (-dg + (-1 + alpha)*u[None,:] + exp(t)*(dg + xkpr[:,None]))**2/(2.*(-1 + exp(2*t) + (alpha**2)*(sg**2))))*(1 + erf(((-1 + exp(2*t) + alpha*(sg**2))*u[None,:] + alpha*(sg**2)*(dg - exp(t)*(dg + xkpr[:,None])))/(sqrt(2)*sg*sqrt((-1 + exp(2*t))*(-1 + exp(2*t) + (alpha**2)*(sg**2)))))))/(2.*sqrt(2*pi)*sqrt(-1 + exp(2*t) + (alpha**2)*(sg**2))) +
(exp(t/2. - ((dg + u[None,:] - exp(t)*(dg + xkpr[:,None]))**2/sinh(t))/(4.*exp(t)))*(1 - erf(u[None,:]/(sqrt(2)*sg))))/(4.*sqrt(pi)*sqrt(sinh(t)))
)*abs(xkpr[1]-xkpr[0])
def compute_noisy_Ttilde(xkr, v, dg=0.8, sig=0.1, t=1e-3, alpha=2):
"""Return the propagator (Eq. S23) for xkr = x_{n}^{r} but for noisy measurements."""
return (
-(exp(t/2. - ((dg + v[None,:] - exp(t)*(dg + xkr[:,None]))**2/sinh(t))/(4.*exp(t)))*(-1 + erf(xkr[:,None]/(sqrt(2)*sig))))/(4.*sqrt(pi)*sqrt(sinh(t))) +
(exp((4*t - (2*xkr[:,None]**2)/((alpha**2)*(sig**2)) - ((dg - dg*exp(t) + v[None,:])**2/sinh(t))/exp(t) + ((1.0/sinh(t))*((alpha**2)*(sig**2)*(dg - dg*exp(t) + v[None,:]) - 2*(-1 + alpha)*xkr[:,None]*sinh(t))**2)/((alpha**2)*(sig**2)*((alpha**2)*(sig**2)*cosh(t) + (2 + alpha*(-4 + alpha*(2 + (sig**2))))*sinh(t))))/4.)*(-1 + (1.0/tanh(t)))*
((1.0/sinh(t))*(-2 + exp(2*t)*(1 + erf((-((alpha**2)*exp(t)*(sig**2)*(dg*(-1 + exp(t)) - v[None,:])) + (-1 + alpha)*(1 - exp(2*t))*xkr[:,None])/(sqrt(2)*alpha*sig*sqrt((-1 + exp(2*t))*(-(-1 + alpha)**2 + exp(2*t)*(1 + alpha*(-2 + alpha + alpha*(sig**2)))))))) +
erfc((exp(t)*((alpha**2)*(sig**2)*(dg - dg*exp(t) + v[None,:]) - 2*(-1 + alpha)*xkr[:,None]*sinh(t)))/(sqrt(2)*alpha*sig*sqrt((-1 + exp(2*t))*(-(-1 + alpha)**2 + exp(2*t)*(1 + alpha*(-2 + alpha + alpha*(sig**2)))))))) +
2*exp(t)*erf((abs((alpha**2)*exp(t)*(sig**2)*(dg*(-1 + exp(t)) - v[None,:]) + (-1 + alpha)*(-1 + exp(2*t))*xkr[:,None])*sqrt(2 + alpha*(-4 + alpha*(2 + (sig**2))) + (alpha**2)*(sig**2)*(1.0/tanh(t))))/(2.*alpha*sig*(-(-1 + alpha)**2 + exp(2*t)*(1 + alpha*(-2 + alpha + alpha*(sig**2))))))*
sign((alpha**2)*exp(t)*(sig**2)*(dg*(-1 + exp(t)) - v[None,:]) + (-1 + alpha)*(-1 + exp(2*t))*xkr[:,None]) - 2*exp(t)*erf((abs(alpha*exp(t)*(sig**2)*(dg*(-1 + exp(t)) - v[None,:]) + xkr[:,None] - alpha*xkr[:,None] + exp(2*t)*(-1 + alpha + alpha*(sig**2))*xkr[:,None])*sqrt(2 + alpha*(-4 + alpha*(2 + (sig**2))) + (alpha**2)*(sig**2)*(1.0/tanh(t))))/
(2.*exp(t)*((alpha**2)*exp(t)*(sig**3) + 2*(-1 + alpha)**2*sig*sinh(t))))*sign(alpha*exp(t)*(sig**2)*(dg*(-1 + exp(t)) - v[None,:]) + xkr[:,None] - alpha*xkr[:,None] + exp(2*t)*(-1 + alpha + alpha*(sig**2))*xkr[:,None]))*sinh(t))/(4.*sqrt(-2*(-1 + alpha)**2*pi + 2*exp(2*t)*pi*(1 + alpha*(-2 + alpha + alpha*(sig**2)))))
)*abs(xkr[1]-xkr[0])
def find_noisy_steady_states(out_grid, in_grid, dg=0.8, sg=0.1, dt=1e-3, alpha=2.0, TOL=5e-3):
"""Find and return the steady-state distributions for xnpr and xnr given noisy measurements."""
# compute transition matrices
T = compute_noisy_T(out_grid, in_grid, dg, sg, dt, alpha)
Ttilde = compute_noisy_Ttilde(out_grid, in_grid, dg, sg, dt, alpha)
# find the 3 largest eigenvalues and associated eigenvectors
p0 = N_distr(out_grid, 0.0, 1.0) # use equilibrium as a starting guess for iteration
wT_ss, vT_ss = eigs(T, k=3, v0=p0)
wTtilde_ss, vTtilde_ss = eigs(Ttilde, k=3, v0=p0)
# find the eigenvector with eigenvalue 1
p_xnpr = vT_ss[:, where((wT_ss - 1.0).__abs__() < TOL)[0][0]]
p_xnr = vTtilde_ss[:, where((wTtilde_ss - 1.0).__abs__() < TOL)[0][0]]
# re-normalize the eigenvectors to make them into distributions
p_xnpr /= trapz(p_xnpr, out_grid)
p_xnr /= trapz(p_xnr, out_grid)
return p_xnpr.real, p_xnr.real
###Output
_____no_output_____
###Markdown
Given the steady-states we compute the input and output works...
###Code
def compute_thermo_quants(ngrid=4000, dg=0.8, sm=0.1, ts=1e-3, alpha=2, return_ss=False):
"""Compute the thermodynamic quantities of input and output work."""
# ========== finding steady-state distributions ==========
out_grid = in_grid = linspace(-60.0, 60.0, int(ngrid))
p_xkpr, p_xkr = find_noisy_steady_states(out_grid, in_grid, dg, sm, ts, alpha)
# regularization -- zero out entries that are sufficiently small and negative
p_xkr[logical_and(p_xkr > -finfo("float32").eps, p_xkr < 0.0)] = 0.0
p_xkpr[logical_and(p_xkpr > -finfo("float32").eps, p_xkpr < 0.0)] = 0.0
# checks on distribution
# will trigger error if there are big enough negative entries that are not
# captured by the regularization above
assert (p_xkr >= 0.0).all(), "p_xkr has non-positive entries!"
assert (p_xkpr >= 0.0).all(), "p_xkpr has non-positive entries!"
p_xkr_norm = trapz(p_xkr, out_grid)
p_xkpr_norm = trapz(p_xkpr, out_grid)
if (abs(p_xkr_norm - 1.0) > (finfo("float32").eps)):
print(f"p_xkr not normalized! Normalization value {p_xkr_norm:.8f}")
if (abs(p_xkpr_norm - 1.0) > (finfo("float32").eps)):
print(f"p_xkpr not normalized! Normalization value {p_xkpr_norm:.8f}")
# compute relevant moments
## first moments
mu_xkpr = trapz(out_grid*p_xkpr, out_grid)
mu_xkr = trapz(out_grid*p_xkr, out_grid)
## second moments
ms_xkpr = trapz((out_grid**2)*p_xkpr, out_grid)
ms_xkr = trapz((out_grid**2)*p_xkr, out_grid)
W_in = 0.5*(ms_xkr-ms_xkpr)
W_out = dg*(mu_xkpr-mu_xkr)
if return_ss:
return W_in, W_out, p_xkpr, p_xkr, out_grid
else:
return W_in
###Output
_____no_output_____
###Markdown
We compute the *information flow* using the steady-state relative coordinate distributions
###Code
def compute_info_flow(ngrid=4000, dg=0.8, sm=0.1, ts=1e-3, alpha=2, p_xkpr_in=None, p_xkr_in=None, inout_grid=None):
# find steady-state distributions only if necessary
if ((p_xkpr_in is None) or (p_xkr_in is None) or (inout_grid is None)):
# define the grid of the solution
out_grid = in_grid = linspace(-60.0, 60.0, int(ngrid))
p_xkpr, p_xkr = find_noisy_steady_states(out_grid, in_grid, dg, sm, ts, alpha)
else:
p_xkpr = copy(p_xkpr_in)
p_xkr = copy(p_xkr_in)
out_grid = copy(inout_grid)
# regularization: zero out entries that are too small
p_xkr[logical_and(p_xkr > -finfo("float32").eps, p_xkr < 0.0)] = 0.0
p_xkpr[logical_and(p_xkpr > -finfo("float32").eps, p_xkpr < 0.0)] = 0.0
# before proceeding to computations check that the distributions are behaving properly
p_xkpr_norm = trapz(p_xkpr, out_grid)
p_xkr_norm = trapz(p_xkr, out_grid)
# bail if negative entries in probability distribution
assert (p_xkpr >= 0.0).all(), "p_xkpr has non-positive entries!"
assert (p_xkr >= 0.0).all(), "p_xkr has non-positive entries!"
# complain if normalization is not sufficient to within single-float
# but don't kill calculation
if (abs(p_xkpr_norm - 1.0) > (finfo("float32").eps)):
print(f"p_xkpr not normalized! Normalization value {p_xkpr_norm:.8f}")
if (abs(p_xkr_norm - 1.0) > (finfo("float32").eps)):
print(f"p_xkr not normalized! Normalization value {p_xkr_norm:.8f}")
# ========== computing entropies ==========
# computing the conditional entropies
H_xkr = -trapz(xlogy(p_xkr, p_xkr), out_grid)
H_xkpr = -trapz(xlogy(p_xkpr, p_xkpr), out_grid)
return H_xkpr - H_xkr
###Output
_____no_output_____
###Markdown
$\alpha = 2$ is no longer correct since the trap is reacting to noisy measurements. Therefore, we search numerically for $\alpha^{*}$, the value of $\alpha$ that makes the input work zero (a bounded scalar minimization of $|W_{\mathrm{in}}|$):
###Code
def optim(sm_in, ts_in, dg_in=0.8, ngrid_in=4000):
def objective_func(alpha, dg, sm, ts, ngrid):
win = compute_thermo_quants(ngrid=ngrid, dg=dg, sm=sm, ts=ts, alpha=alpha, return_ss=False)
return abs(win)
res = minimize_scalar(objective_func, bounds=(1e-2, 2.5), args=(dg_in, sm_in, ts_in, ngrid_in), method="bounded")
return res.x
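# Hypothetical usage (not run here; each objective evaluation solves for a
# steady state on a large grid, so the bounded search is expensive):
# astar = optim(sm_in=0.1, ts_in=1e-3)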
###Output
_____no_output_____
###Markdown
Putting all of this together we define a function that can compute the input and output work of our engine, as well as the information flow and the efficiency given experimental parameters:
###Code
def compute_noisy_efficiency(SNR=10.0, fs=180.0, dg=0.8, ngrid=10000):
"""Run the calculation that computes the efficiency for the experimental ratchet parameters."""
sm = 1.0/SNR
ts = 1.0/fs
# find the alpha that makes the input work zero
astar = optim(sm, ts, dg, ngrid_in=int(ngrid))
# compute the quantities needed for the efficiency
win, wout, p_xkpr, p_xkr, out_grid = compute_thermo_quants(
ngrid=int(ngrid), dg=dg, sm=sm, ts=ts, alpha=astar, return_ss=True
)
inf_flow = compute_info_flow(
ngrid=int(ngrid), dg=dg, sm=sm, ts=ts, alpha=astar,
p_xkpr_in=p_xkpr, p_xkr_in=p_xkr, inout_grid=out_grid
)
# compute the efficiency
eta = wout/(win+inf_flow)
return win, wout, inf_flow, eta
compute_noisy_efficiency()
###Output
_____no_output_____
###Markdown
Statistical measures of speed:
###Code
speeds_arr = np.array(speeds)
print("Mean: ", np.mean(speeds_arr))
print("Median: ", np.median(speeds_arr))
print("Mode: ", stats.mode(speeds_arr))
print("Max: ", np.max(speeds_arr, axis=0))
count_types_dict = General.count_types_vehicle()
plt.title("Розподіл типу транспорту за видом")
plt.pie(count_types_dict.values(), labels=count_types_dict.keys(), autopct='%.2f')
plt.show()
all_miss = AnalyzeSchedule.get_all_miss()
plt.figure(figsize=(9, 5))
plt.title("Розподіл значень відстаней машин від зупинок за розкладом, м")
plt.hist(all_miss, density=True, bins=500)
plt.show()
###Output
_____no_output_____
###Markdown
Statistical measures of vehicle distances from scheduled stops:
###Code
all_miss_arr = np.array(all_miss)
print("Mean: ", np.mean(all_miss_arr))
print("Median: ", np.median(all_miss_arr))
print("Mode: ", stats.mode(all_miss_arr))
print("Max: ", np.max(all_miss_arr, axis=0))
start_time = datetime.datetime(2020, 5, 24, 11, 7, 0)
stop_time = datetime.datetime(2020, 5, 24, 17, 0, 0)
time_periods_miss, count_avg_miss = AnalyzeSchedule.calculate_average_miss(start_time, stop_time, 5)
plt.figure(figsize=(9, 5))
plt.plot(range(len(count_avg_miss)), count_avg_miss)
plt.xticks(range(len(count_avg_miss))[::2], time_periods_miss[::2], rotation='vertical')
plt.title("Зміна моди відстаней машин від зупинок за розкладом протягом дня")
plt.show()
time_periods_miss, count_percentage_miss = AnalyzeSchedule.calculate_percentage_schedule_hit(start_time, stop_time, 5)
plt.figure(figsize=(9, 5))
plt.plot(range(len(count_percentage_miss)), count_percentage_miss)
plt.xticks(range(len(count_percentage_miss))[::2], time_periods_miss[::2], rotation='vertical')
plt.title("Зміна відсотку виконання розкладу протягом дня")
plt.show()
x = np.linspace(0, len(count_avg_miss), num=8, endpoint=True)
y = np.array([1, 2, 8, 5, 4, 3, 2, 2])
y2 = interp1d(x, y, kind='cubic')
x_traffic = range(len(count_avg_miss))
y_traffic = [y2(val) for val in x_traffic]
plt.figure(figsize=(9, 5))
plt.plot(x_traffic, y_traffic)
plt.title("Зміна відносного показника наявності заторів протягом дня")
plt.xticks(range(len(count_avg_miss))[::2], time_periods_miss[::2], rotation='vertical')
plt.show()
x = np.array(count_avg_miss).astype(float)
y = y_traffic
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
plt.figure(figsize=(9, 5))
plt.title("Регресія залежності моди відстаней машин від зупинок за розкладом\n" \
"відносно показника наявності заторів протягом дня")
plt.plot(x,y,'ob')
plt.plot(x, intercept + slope*x, 'r')
plt.show()
###Output
_____no_output_____
###Markdown
Summer Olympics Data Analysis Assignment
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df=pd.read_csv('summer.csv')
df
# cleaning data 1
checking_null=df.isnull()
total_null=checking_null.sum()
total_null # finding total null values in the dataset
# cleaning data 2
df = df.dropna()  # dropna() returns a new DataFrame, so reassign to keep the cleaned data
df
###Output
_____no_output_____
###Markdown
1. In how many cities has the Summer Olympics been held so far?
###Code
df['City'].nunique()  # count of distinct host cities
###Output
_____no_output_____
###Markdown
2. Which sport has the most Gold Medals so far? (Top 5)
###Code
for_gold=df[df['Medal']=='Gold']
most_gold=for_gold['Sport'].value_counts().head()
print(most_gold)
most_gold.plot(kind='bar')
plt.show()
###Output
Aquatics 1421
Athletics 1215
Rowing 890
Gymnastics 820
Fencing 552
Name: Sport, dtype: int64
###Markdown
3. Which sport has the most medals so far? (Top 5)
###Code
most_medals=df['Sport'].value_counts().head()
print(most_medals)
most_medals.plot(kind='bar')
plt.show()
###Output
Aquatics 4170
Athletics 3638
Rowing 2667
Gymnastics 2307
Fencing 1613
Name: Sport, dtype: int64
###Markdown
4. Which player has won the most medals? (Top 5)
###Code
ath_med=df['Athlete'].value_counts().head()
print(ath_med)
ath_med.plot(kind='bar')
plt.show()
###Output
PHELPS, Michael 22
LATYNINA, Larisa 18
ANDRIANOV, Nikolay 15
ONO, Takashi 13
MANGIAROTTI, Edoardo 13
Name: Athlete, dtype: int64
###Markdown
5. Which player has won the most Gold Medals? (Top 5)
###Code
gold_players=df[df['Medal']=='Gold']['Athlete'].value_counts().head()
print(gold_players)
gold_players.plot(kind='bar')
plt.show()
###Output
PHELPS, Michael 18
SPITZ, Mark 9
LATYNINA, Larisa 9
LEWIS, Carl 9
NURMI, Paavo 9
Name: Athlete, dtype: int64
###Markdown
6. In which year did India win its first Gold Medal in the Summer Olympics?
###Code
g_medal=df[df['Medal']=='Gold']
only_ind=g_medal[g_medal['Country']=='IND']
only_ind['Year'].min() # we can also use only_ind['Year'].sort_values(ascending=True)
###Output
_____no_output_____
###Markdown
7. Which event is the most popular in terms of number of players? (Top 5)
###Code
most_pop=df['Event'].value_counts().head()
print(most_pop)
most_pop.plot(kind='bar')
plt.show()
###Output
Football 1497
Hockey 1422
Team Competition 1147
Basketball 1012
Handball 973
Name: Event, dtype: int64
###Markdown
8. Which sport has the most female Gold Medalists? (Top 5)
###Code
for_women=df[df['Gender']=='Women']
for_gold=for_women[for_women['Medal']=='Gold']
most_fem=for_gold['Sport'].value_counts().head()
print(most_fem)
most_fem.plot(kind='bar')
plt.show()
###Output
Aquatics 589
Athletics 389
Gymnastics 268
Rowing 217
Volleyball 166
Name: Sport, dtype: int64
###Markdown
Assigned Tickets per Support Operator
###Code
data = messages['operator'].value_counts().reset_index()
fig = plt.figure(figsize=(10, 4), dpi=80)
ax = plt.subplot(111)
ax.grid(axis='y', zorder=0, color='lightgray')
ax.bar(data['index'], data['operator'], zorder=10, color='darkorange', width=0.7)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlim([-0.7, len(data)-0.3])
ax.set_title('Assigned Tickets per Support Operator')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Number of Tickets per Category
###Code
data = messages['category'].value_counts().reset_index()
fig = plt.figure(figsize=(10, 4), dpi=80)
ax = plt.subplot(111)
ax.grid(axis='y', zorder=0, color='lightgray')
ax.bar(data['index'], data['category'], zorder=10, color='firebrick', width=0.7)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlim([-0.7, len(data)-0.3])
ax.set_title('Number of Tickets per Category')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Tickets Created per Week Over Time
###Code
def add_zero_week_datapoints(series):
dates = []
date = min(series.index)
while date < max(series.index):
if date not in series.index:
dates.append(date)
date += datetime.timedelta(weeks=1)
dates = pd.DataFrame({'timestamp': dates, 'count': [0]*len(dates)}).set_index('timestamp')
return series.append(dates).sort_index()
data = messages[['operator', 'timestamp']].copy()
data['timestamp'] = data['timestamp'].apply(lambda x: x.date() - datetime.timedelta(days=x.weekday()))
e = data.groupby('timestamp')['timestamp'].agg(['count'])
e = add_zero_week_datapoints(e)
fig = plt.figure(figsize=(14, 2), dpi=80)
ax = plt.subplot(111)
ax.grid(axis='y', zorder=0, color='lightgray')
ax.plot(e, zorder=10, color='darkgreen')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylim([0, None])
ax.set_xlim([min(data['timestamp']), max(data['timestamp'])])
ax.set_title('Tickets Created Per Week Over Time')
plt.show()
###Output
_____no_output_____
###Markdown
Tickets Handled By Specific Operators Over Time
###Code
for operator in data['operator'].unique():
# prepare data: fill weeks without activity with zeros
e = data[data['operator'] == operator].groupby('timestamp')['timestamp'].agg(['count'])
e = add_zero_week_datapoints(e)
# initialize matplotlib figure
fig = plt.figure(figsize=(14, 0.8), dpi=80)
ax = plt.subplot(111)
# set style and plot data
ax.grid(axis='y', zorder=0, color='lightgray')
ax.plot(e, zorder=10, color='indigo')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylim([0, 40])
ax.set_xlim([min(data['timestamp']), max(data['timestamp'])])
ax.set_title(f'Tickets Handled By {operator} Per Week Over Time')
plt.show()
###Output
_____no_output_____
###Markdown
Signals dataset
###Code
h5 = h5py.File('../data/db_sinais.h5','r')
print(list(h5.keys()))
sig = np.array(h5['esperados'])
sat = np.array(h5['saturados'])
ctd = np.array(h5['cortados'])
dpc = np.array(h5['doisPicos'])
fig = plt.figure(figsize=(15,13))
fig.add_subplot(2,2,1)
plt.plot(sig[3015],'k')
plt.xlabel('Samples')
plt.ylabel('Amplitude (ADC)')
plt.title('Expected')
plt.grid()
fig.add_subplot(2,2,2)
plt.plot(sat[6000],'k')
plt.xlabel('Samples')
plt.ylabel('Amplitude (ADC)')
plt.title('Saturated')
plt.grid()
fig.add_subplot(2,2,3)
plt.plot(ctd[2500],'k')
plt.xlabel('Samples')
plt.ylabel('Amplitude (ADC)')
plt.title('Cut')
plt.grid()
fig.add_subplot(2,2,4)
plt.plot(dpc[2500],'k')
plt.xlabel('Samples')
plt.ylabel('Amplitude (ADC)')
plt.title('Double Peak')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Experiment signal vs. laboratory signal
###Code
sinais_exp = np.load('../data/Sinais_bons.npy','r')
sinais_exp.shape
plt.figure(figsize=(10,8))
plt.plot(sinais_exp[1000]/max(sinais_exp[1000]),'--', color='dimgray',linewidth=2.0,label='Experiment')
plt.plot(np.roll(sig[5500]/max(sig[5500]),2),'k',label='Laboratory')
plt.xlabel('Time (samples)')
plt.ylabel('Normalized Amplitude')
plt.grid()
plt.legend()
#plt.savefig('signal_shape.pdf', format='pdf',dpi=300, bbox_inches='tight', pad_inches=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
Parameters dataset
###Code
#Load DataFrame
df = pd.read_csv('../data/DataFrame_Aqst.csv',index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Number of events per class
###Code
amp = np.array([])
for i in range(1,5):
amp = np.append(amp, len(df.loc[(df['Label'] == i)]))
classes = ['Uncorrupted', 'Saturated', 'Cut', 'Double Peak']
plt.figure(figsize=(10,8))
plt.bar(classes, amp, width=0.3, color='black')
plt.xlabel('Classes')
plt.ylabel('Events')
plt.grid()
#plt.savefig('amp_classes.pdf', format='pdf',dpi=300, bbox_inches='tight', pad_inches=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
PDFs of the parameters
###Code
# Amplitude vs. Area
plt.figure(figsize=(10,8))
cl = [2,3,0,1]
color = ['darkgrey','black', 'silver', 'dimgray']
mk = ('s','P','^','o')
for i in cl:
amp = df["Amp"][df["Label"] == i+1]
area = df["Area"][df["Label"] == i+1]
plt.scatter(area,amp,label=classes[i],marker=mk[i],s=30,color=color[i])#,edgecolors=color[i],facecolor='None')
plt.legend()
plt.xlabel('Area (Samples*ns)')
plt.ylabel('Amplitude')
plt.grid()
#plt.savefig('area_amp_sim3.png', format='png',dpi=300, bbox_inches='tight', pad_inches=0.1)
plt.show()
# Peak position
bins = [4, 4, 52, 37]
color = ['black','gray','black','gray']
linestyle = ['-','-','--','--']
plt.figure(figsize=(9,6))
for i in range(4):
plt.hist(df["Pos_Amp"][df["Label"]==i+1],label = classes[i], bins = bins[i], density=True,
histtype= 'step',linewidth = 2.0, linestyle=linestyle[i], color=color[i])
plt.legend()
plt.xlabel('Peak Position (Sample)')
plt.ylabel('Normalized Histogram')
plt.grid()
#plt.savefig('pos_amp.pdf', format='pdf',dpi=300, bbox_inches='tight', pad_inches=0.1)
plt.show()
# Full width at half maximum (FWHM)
bins = [5, 10, 9, 36]
color = ['black','gray','black','gray']
linestyle = ['-','-','--','--']
plt.figure(figsize=(9,6))
for i in range(4):
plt.hist(df["FWHM"][df["Label"]==i+1],label = classes[i], bins = bins[i], density=True,
histtype= 'step',linewidth = 2.0, linestyle=linestyle[i], color=color[i])
plt.legend()
plt.xlabel('Pulse Width (Samples)')
plt.ylabel('Normalized Histogram')
plt.grid()
#plt.savefig('pulse_width.pdf', format='pdf',dpi=300, bbox_inches='tight', pad_inches=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
Description: this program uses an artificial recurrent neural network called Long Short-Term Memory (LSTM) to predict the closing price of an index (S&P 500) using the past 60 days of index prices.
###Code
# Import the libraries
import math
import pandas_datareader as web
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
import matplotlib.pyplot as plt
import mplcyberpunk
plt.style.use('fivethirtyeight')
# Get the stock quote
df = web.DataReader('^GSPC', data_source='yahoo', start='2012-01-01', end='2020-12-17')
# Show the data
df
# Get the number of rows and columns in the data set
df.shape
# Visualize the closing price history
# plt.style.use("cyberpunk")
plt.figure(figsize=(16, 8))
plt.title('S&P500 Close Price History')
plt.plot(df['Close'])
plt.xlabel('Data', fontsize=18)
plt.ylabel('Close Price USD ($)', fontsize=18)
# mplcyberpunk.add_glow_effects()
plt.show()
# Create a new dataframe with only the Close column
data = df.filter(['Close'])
# Convert the dataframe to a numpy array
dataset = data.values
# Get the number of rows to train the model on
training_data_len = math.ceil( len(dataset) * .8 )
training_data_len
# Scale the data
scaler = MinMaxScaler(feature_range=(0,1))
scaled_data = scaler.fit_transform(dataset)
scaled_data
# Create the training data set
# Create the scaled training data set
train_data = scaled_data[0:training_data_len, :]
# Split the data into x_train and y_train data sets
x_train = []
y_train = []
for i in range(60, len(train_data)):
x_train.append(train_data[i-60:i, 0])
y_train.append(train_data[i, 0])
if i<= 61:
print(x_train)
print(y_train)
print()
# Convert the x_train and y_train to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)
# Reshape the data
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_train.shape
# Build the LSTM model
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(x_train.shape[1], 1)))
model.add(LSTM(50, return_sequences=False))
model.add(Dense(25))
model.add(Dense(1))
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
model.fit(x_train, y_train, batch_size=1, epochs=1)
# Create the testing data set
# Create a new array containing scaled values from index 1745 to 2256
test_data = scaled_data[training_data_len - 60:, :]
# Create the data sets x_test and y_test
x_test = []
y_test = dataset[training_data_len:, :]
for i in range(60, len(test_data)):
x_test.append(test_data[i-60:i,0])
# Convert the data into a numpy array
x_test = np.array(x_test)
# Reshape the data
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
# Get the models predicted price values
predictions = model.predict(x_test)
predictions = scaler.inverse_transform(predictions)
# Get the root mean squared error (RMSE)
rmse = np.sqrt(np.mean((predictions - y_test)**2))  # square the errors before averaging
rmse
# Plot the data
train = data[:training_data_len]
valid = data[training_data_len:].copy()  # copy to avoid pandas' SettingWithCopyWarning
valid['Predictions'] = predictions
# Visualize
plt.figure(figsize=(16,8))
plt.title('LSTM Prediction Model on S&P 500')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price ($)', fontsize=18)
plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
plt.legend(['Train', 'Val', 'Predictions'], loc='lower right')
plt.show()
# Show the valid and predicted prices
valid
# Get the quote
sp500_quote = web.DataReader('^GSPC', data_source='yahoo', start='2012-01-01', end='2020-12-17')
# Create a new dataframe
new_df = sp500_quote.filter(['Close'])
# Get the last 60 day closing price values and convert the dataframe to an array
last_60_days = new_df[-60:].values
# Scale the data to the values between 0 and 1
last_60_days_scaled = scaler.transform(last_60_days)
# Create an empty list
X_test = []
# Append the past 60 days
X_test.append(last_60_days_scaled)
# Convert the X_test data set to a numpy array
X_test = np.array(X_test)
# Reshape the data
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
# Get the predicted scaled price
pred_price = model.predict(X_test)
# Undo the scaling
pred_price = scaler.inverse_transform(pred_price)
print(pred_price)
sp500_quote2 = web.DataReader('^GSPC', data_source='yahoo', start='2020-12-18', end='2020-12-18')
print(sp500_quote2['Close'])
###Output
Date
2020-12-18 3709.409912
2020-12-18 3709.409912
Name: Close, dtype: float64
###Markdown
2016 Facebook's Russian ads
===========================

Benjamin Brodeur Mathieu, University of Washington, 12/08/2019

Note: If the images do not appear, start your notebook kernel from the repository's root folder.

Introduction
------------

This analysis takes a look at the Facebook ads used by the IRA (Internet Research Agency), a Kremlin-supported "troll" farm, reported by the HPSCI (House of Representatives Minority Permanent Select Committee on Intelligence). The ads, dating from 2015 to 2017, were identified as related to the group by Facebook and were made public after a hearing was held between the HPSCI committee and social media companies.

> "*On February 16, 2018 Special Counsel Robert S. Mueller III indicted 13 Russian individuals and three Russian organizations for engaging in operations to interfere with U.S. political and electoral processes, including the 2016 presidential election.*" - HPSCI$^{[1]}$

In the 2016 elections, the United States political system was influenced by a foreign power whose goal was to

> "*sow discord in the U.S. political system, including the 2016 U.S. presidential election. Defendants posted derogatory information about a number of candidates, and by early to mid-2016, Defendants' operations included supporting the presidential campaign of then-candidate Donald J. Trump ("Trump Campaign") and disparaging Hillary Clinton.*" - Indictment United States of America v. Internet Research Agency LLC$^{[2]}$

As we are currently a year away from the 2020 presidential election, it is important to understand how these actors went about influencing the elections, so that actions can be taken to recognize and ultimately undermine such efforts. Voting is **the** way American citizens make their voice heard and influence decision making at a national level and, as a consequence of the United States having one of the largest economies and a far-reaching culture, at a global level.

Data Source
-----------

The raw pdfs of the IRA's Facebook ads were made publicly available on the HPSCI's [data website](https://intelligence.house.gov/social-media-content/social-media-advertisements.htm) and [information website](https://intelligence.house.gov/social-media-content/).

The dataset doesn't specify a license or terms of use, but the text of the intelligence website makes clear that it is making the data available for public / academic use:

> "*As part of that continuing effort to educate the public and seek additional analysis, the Committee Minority is making available all IRA advertisements identified by Facebook. This is an effort to be fully transparent with the public, allow outside experts to analyze the data, and provide the American people a fuller accounting of Russian efforts to sow discord and interfere in our democracy. [...] Congress does not have the technical expertise to fully analyze this data—that lies in outside groups such as news publications and academic researchers. We hope that the publication of these materials will facilitate this important work.*" - HPSCI data website$^{[3]}$

Description
-----------

The descriptions in the table below were extracted from the Enigma website$^{[5]}$, which hosts its own version of the dataset in csv format. The comments in parentheses were added retrospectively.

| Field name | Type | Description |
| ----------------- | ------- | -------------------------------------------------------------------------------- |
| Ad Id | integer | Unique identifier assigned to Facebook advertisement. |
| Ad Text | string | Facebook advertisement text. |
| Ad Landing Page | string | URL to Facebook advertisement landing page. |
| Ad Targeting | integer | Facebook advertisement targeting, unparsed. |
| Ad Impression | integer | Number of Facebook advertisement impressions. |
| Ad Clicks | integer | Number of Facebook advertisement clicks. |
| Ad Spend | string | Money spent on Facebook advertisement in Russian rubles (RUB). |
| Ad Creation Date | string | Date Facebook advertisement was created in MM-DD-YY format. |
| Ad End Date | string | Date Facebook advertisement ended in MM-DD-YY format. |
| Target Location | string | Facebook advertisement target location. (state origin / state living) |
| Target Age | string | Facebook advertisement target age. |
| Target Language | string | Facebook advertisement target language. (language(s) spoken by target audience) |
| Target Placements | string | Facebook advertisement target placements. (app and location) |
| Target People | string | Facebook advertisement target people. (likes) |

Background
----------

Due to the potential repercussions and scale of the issue, a lot of work has already been done by a variety of institutions to understand this dataset. These efforts can be divided into three categories.

### Journalistic

Articles by news outlets like the New York Times$^{[6]}$ and the Wall Street Journal$^{[7]}$ tend to summarize the issue, show key statistics such as the number of ads made available, and focus on specific examples of outrageous advertisements crafted by the group. These sources also mention that the key goal of the IRA was to increase tension around divisive political issues.

### Students / data enthusiasts

Groups on Kaggle$^{[8]}$ did some key descriptive analysis and basic clustering analysis. One group of data enthusiasts in particular, *tech for campaigns*, published a comprehensive descriptive analysis$^{[9]}$ of the dataset which included the following findings:

- Trump and Hillary were not the focus of the ad campaign
- Instead, identity politics were the main focus
- Ads concentrated around African Americans and Conservatives
- Analysis of ad spend, ads viewed and click through rates per demographic

### Massive cross-dataset studies

Two massive studies were conducted by merging the Facebook ads dataset with other sources of data. Below is a summary of their goals and findings.

#### On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook$^{[10]}$

This study focused on three issues:

1. How divisive the IRA ads content was.

> "*we conducted three online surveys on a U.S. census-representative sample (n=2,886). We used each survey to measure one of three axes along which ads could potentially be divisive: 1) reporting: whether respondents would report the ads, and why, 2) approval and disapproval: whether they approve or disapprove the content of the ad, and 3) false claims: if they are able to identify any false claims in the content of the ad. [...] We find that many of these ads were severely divisive, and generated strongly varied opinions across the two ideological groups of liberals and conservatives.*"

2. The effectiveness of the targeting of socially divisive ads.

> "*Click through rate was 10x higher than that of typical Facebook ads.*"

> "*A deeper analysis of the demographic biases in the targeted audience reveals that the ads have been targeted at people who are more likely to approve the content and perceive fewer false claims, and are less likely to report.*"

3. What features of Facebook's ad API were leveraged in targeting the ads?

> "*Facebook provides a tool for advertisers that, given a target attribute, presents a list of other attributes that target people with similar demographic aspects.*"

> "*We also provide strong evidence that these advertisers have explored the Facebook suggestions tool to engineer the targeted populations.*"

During their analysis the team of researchers also:

- Counted the number of ads created, their impressions, cost and received clicks over time.
- Identified the groups that were most targeted.
- Analyzed grouping by urls.
- Analyzed the role of website redirections.

#### The IRA, Social Media and Political Polarization in the United States, 2012-2018$^{[11]}$

The study looked at the Facebook ads dataset and complemented it with Facebook posts as well as their own content gathering from Twitter and YouTube. They also looked at the virality of the campaign through shares and likes. They identified that:

> "*Peaks in advertising and organic activity often correspond to important dates in the US political calendar, crises, and international events*"

> "*The most far reaching IRA activity is in organic posting, not advertisements*"

Russia's IRA activities were designed to polarize the US public and interfere in elections by:

> - campaigning for African American voters to boycott elections and procedures in 2016, and more recently for Mexican American and Hispanic voters to distrust US institutions
> - encouraging extreme right-wing voters to be more confrontational
> - spreading sensationalist, conspiratorial, and other forms of junk political news and misinformation to voters across the political spectrum.
>
> Finally, the IRA was able to leverage their presence on multiple platforms once detection efforts caught up with them by redirecting traffic to platforms where their activities had not been disrupted, and by using their accounts on one social media platform to complain about suspensions of their accounts on another platform.

Research questions
------------------

Overall, the articles and studies did not have a strong focus on measuring the engagement users had with the ads. Although the click through rate might not be the best proxy for how engaged users were, it gives a baseline for understanding the proportion of individuals who were compelled to action by the IRA's advertisements. This research will attempt to answer the following questions.

### Q1: Were some targeted demographics more engaged with the IRA ads?

#### Method

Oftentimes, Facebook ads target users using "interests". The definition of what constitutes an "interest" is given below, from the Facebook advertising page$^{[12]}$:

Based on a visual examination of multiple ads' pdfs, the "interests" fields were leveraged to find the targeted demographics. The "interests" field was chosen as it was the only field reliably present throughout the dataset.

To create demographic groups, the most frequent terms in the "interests" were ranked by frequency. Clusters of these terms were created based on common cultural themes (when these were ambiguous, a visual inspection of ads containing the interests was used). Ads containing the same terms identified in this first "pass" were then used to identify more ads belonging to the same group. This iterative process stopped when the number of ads for each group did not grow significantly from one iteration to another. A minimal illustrative sketch of this iterative process appears just before the analysis code below. For more specifics on the criteria used for creating demographic groups see the [demographic_labeling notebook](demographic_labeling.ipynb).

This approach was favored over a purely algorithmic approach involving k-means and tf-idf, as the term frequencies within the ads were not sufficient to group interests by cultural context. To know more about the k-means clustering approach and why it was not leveraged in this analysis, refer to the [k means test notebook]([TEST]_k_means_demographic_labeling.ipynb).

Engagement by demographic was then graphed using a barplot. Instead of directly comparing the click through rates of the different demographics, the ratio with the average Facebook ad campaign's click through rate$^{[13]}$ was used to give an idea of the comparative attractiveness of the IRA ads.

### Q2: How did the amount of ads seen by targeted demographics change preceding political events?

#### Method

The Oxford study$^{[11]}$ was able to identify spikes in Facebook posts around a series of important political events. Using their timeline, increases in political ads in the week leading up to and following some of the same political events were evaluated.

![Oxford events important political events timeline](../assets/pictures/important_events.png)

In order to compare demographic groups' numbers, a multiline graph showcasing the most targeted groups in terms of daily ad impressions was used for each event. The multiline graph enabled comparisons between the groups to be drawn easily for the period of interest. Daily impressions were chosen instead of the raw count of ads as the latter is not representative of an ad's audience.

For both questions, only ads which had at least one click, at least one impression and a nonzero cost were used. The first cross-dataset study$^{[10]}$ identified that ads lacking these attributes were unlikely to have actually been run by the IRA group.

Findings
--------

To interpret these findings accurately, keep in mind the subjective definitions of the demographics created:

| Demographic | Subset of the targeted interests used to define the group |
|---------------------|--------------------------------------------------------------------------------------|
| Mexican-American | La Raza, Chicano, Hispanidad etc. |
| Native-American | Native American Indian Wisdom, Cherokee Nation etc. |
| African-American | African-American history, Black nationalism, Malcom X, Stop Police Brutality, Gospel |
| Memes | BuzzFeed, CollegeHumor, 9GAG, Imgur, iFunny etc. |
| LGBT | LGBT community, Homosexuality, Same-sex marriage etc. |
| Left wing | Bernie Sanders, Born Liberal, Homeless shelter etc. |
| Right wing | Patriotism, Donald Trump, Republican Party, "From my cold dead hands" etc. |
| Self-Defense | Mixed martial arts, Martial arts, Self-defense etc. |
| Muslim-American | Muslim-Brotherhood, Islam, Muhammad, State of Palestine etc. |
| Free music software | Grooveshark, Free software, Music etc. |

When there was not a clear relation between the demographic and culture, individual ads were consulted.

### Q1: Were some targeted demographics more engaged with the IRA ads?

![CTR by demographic](../assets/pictures/q1.png)

We find that all demographics, with the exception of "Free music software", were at least 4 times more likely to interact with an IRA ad than the average Facebook ad$^{[13]}$. This echoes the findings of the first cross-dataset study$^{[10]}$, but adds nuance regarding the distribution of the engagement: the "Free music software", "Muslim-American" and "Self-Defense" demographics were notably lower than the rest. The "Mexican-American", "Native-American" and "African-American" groups were the most responsive, being more than 10 times more likely to interact with the IRA ads.

### Q2: How did the amount of ads seen by targeted demographics change preceding political events?

We find that there were spikes in IRA activity on Facebook around all of the events chosen:

12/18/2015 - [Third democratic debate](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_debates_and_forums#Schedule)
01/14/2016 - [Sixth republican debate](https://en.wikipedia.org/wiki/2016_Republican_Party_presidential_debates_and_forums)
02/01/2016 - [Iowa caucuses (start of primaries)](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_primaries)
06/14/2016 - [End of primaries](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_primaries)
09/16/2016 - [First presidential debate between Hillary and Donald](https://en.wikipedia.org/wiki/2016_United_States_presidential_debates#First_presidential_debate)
10/04/2016 - [Second presidential debate between Hillary and Donald](https://en.wikipedia.org/wiki/2016_United_States_presidential_debates#Second_presidential_debate)
10/14/2016 - [Third presidential debate between Hillary and Donald](https://en.wikipedia.org/wiki/2016_United_States_presidential_debates#Third_presidential_debate)
11/08/2016 - [Election day](https://en.wikipedia.org/wiki/2016_United_States_presidential_election)
12/29/2016 - [Obama announces sanctions against Russia](https://en.wikipedia.org/wiki/Special_Counsel_investigation_(2017%E2%80%932019)#Links_between_Trump_associates_and_Russian_officials)

For the sake of brevity, the changes near the third democratic debate, the end of the primaries and election day will be described.

> "Impressions measure how often your ads were on screen for your target audience." - Facebook for Business$^{[14]}$

#### Third democratic debate

![Third democratic debate](../assets/pictures/q2_3rd_dem.png)

We find a spike of around 8 thousand ad impressions targeting the Muslim-American demographic, with a small number of ads targeting other demographics, in the 7 days before and after the debate.

#### End of primaries

![End of primaries](../assets/pictures/q2_end_primary.png)

We find a spike of about 11 thousand ad impressions targeting the LGBT demographic 3 days after the end of the primaries.

#### Election day

![Election day](../assets/pictures/q2_election.png)

We see a massive increase, around 110 thousand ad impressions, in ads targeting African-Americans starting 5 days before the election.

Discussion
----------

We find that the IRA ads were very effective at getting Facebook users to interact with their content. However, interaction was measured using the ads' click through rate, which does not equate to belief or trust in the content portrayed. Moreover, we have no control variables to compare these findings against. The following questions could be used to bolster our efforts:

* How did the toxicity of certain ads play a role in their high click through rate?
* How did the number of ads for a demographic affect their click through rate?
* Are some demographics more likely to click on Facebook ads than others?
* How does some populations' skepticism come into play?
  * In our case, groups such as "Self-Defense", "Muslim-American" and "Free music software" were all not very likely to interact with ads.

Regarding our second question, analyzing the events as well as the broader contexts in which the ads were shown can help us obtain a sense of how the IRA leveraged the ads to influence the 2016 elections.

The first spike near the 3rd democratic debate seems to have been related to the debate's topics,

> _\[which included\] ISIS, President Assad of Syria [...] stability in the Middle East_ - Wikipedia$^{[15]}$

The ads used around the event show the IRA attempting to create ideologically opposed groups. We also notice that both of the ads contain errors ("the're" instead of "they're", and "unit" instead of "unite"), which could be used to enhance the return on investment of these ads:

> _By \[sharing information\] that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor._ - Microsoft Research$^{[16]}$

The spike in ads targeting the LGBT demographic around the end of the primary was unrelated to the event: this second ad shows how the IRA created events which occurred on the ground in the United States. These in-person events must have given credibility to these Facebook groups.

With the information above as background, we can see how, after a year and a half of activities, some of these groups may have been perceived as trustworthy by their following. The massive increase in targeting of African-Americans, combined with the messages used by the groups surrounding the election, shows how these groups could be leveraged to reduce voter turnout: through the above ads, we see that the IRA attempted to decrease African-American voter turnout by underlining that the election was between two unappealing white candidates and that Hillary was not trustworthy. More insidiously, we see that the ads shown immediately after the election normalized not voting.

> _"This is exactly what i have been trying to tell y'all. Your vote should be precious to y'all cus its your power to put another person in authority over you, your freedom, your daily life and your safety. To most people, it's about just going out to vote, if you did just that because you want to, well done, if you didn't, aka BOYCOTT just like me, you also got your reasons. BUT, let those reasons be genuine! **Let them be your reasons, not influenced upon or manipulated in any way.** I'm proud of this mama not because she didn't vote Hillary or Trump, but simply because she did her research, decided for herself, in her interest and the interest of her kids, and voted consciously! **I'm glad i choose not to be a part of this whole system. Im proud to be woke**"_ - Ad ID 3163 (P(1)0001161.pdf)

Overall, more time needs to be spent revisiting the news cycles around the different political events to understand some of the targeting strategies used by the IRA. The ads above were picked because they helped portray my perception of the IRA's intentions. As a result, they are biased by my desire to communicate a clear narrative. The reality of these ads and their context is more muddled when taking into account all of the ads shown during the same periods simultaneously. Many ads tried to create communities near the election date; others displayed police brutality, which may have reinforced the notion that the political system did not work toward African-American interests and discouraged voters. Yet other ads seemed entirely nonsensical.

This analysis focused on ads reported by Facebook, but other cross-platform studies have already framed the interactions between the Facebook ads and the other platforms used by the group. These studies give a much clearer high-level picture of how ads were used as part of the IRA's strategy. Nonetheless, this research reinforced my perception that studying the targeting strategies aimed at specific demographics can provide information about the tactics used by the IRA. Identification of these techniques could eventually be used for the detection and prevention of misinformation on social media platforms.

Conclusion and reflection
-------------------------

This analysis attempted to shed light on the IRA's use of Facebook ads to further their political agenda by interfering with the 2016 US presidential election. We first examined whether some of the targeted demographics were more or less engaged with the IRA ads. It was found that nearly all demographics were at least four times more likely to engage with an IRA ad than the average Facebook ad. The demographics most likely to interact were Mexican-Americans, Native-Americans and African-Americans, while the least likely to interact were Self-Defense (Martial arts) advocacy groups, Muslim-Americans and groups interested in free music software.

We then examined how different demographics had been targeted near important political events. Our findings suggest that the IRA infiltrated and created Facebook groups and communities which caused discord in public discourse. These same groups obtained credibility by creating rallies and events on the ground in the United States. Some groups ultimately leveraged their following to spread messages meant to suppress voter turnout near election day.

This analysis complements other studies on the subject by using a primarily human-centered approach. Using a purely algorithmic method to cluster the demographics would have lost the richness of cultural associations the IRA leveraged to target the groups. Similarly, spikes in the data were put in the context of these demographics and the different political events.

References
----------

\[1\] [HPSCI website](https://intelligence.house.gov/social-media-content/)
\[2\] [Indictment United States of America v. Internet Research Agency LLC](https://www.justice.gov/file/1035477/download)
\[3\] [HPSCI data website](https://intelligence.house.gov/social-media-content/social-media-advertisements.htm)
\[4\] [HPSCI information website](https://intelligence.house.gov/social-media-content/)
\[5\] [Enigma facebook ads dataset](https://public.enigma.com/datasets/committee-minority-report-on-facebook-ads/619060d1-71ad-4764-8f2f-b5a3872c05c7)
\[6\] [New York Times 2016 election facebook ads article](https://www.nytimes.com/2017/11/01/us/politics/russia-2016-election-facebook.html)
\[7\] [Wall Street Journal 2016 facebook ads](https://www.wsj.com/articles/full-stock-of-russia-linked-facebook-ads-shows-how-propaganda-sharpened-1525960804)
\[8\] [Exploring political propaganda on facebook (Kaggle entry)](https://www.kaggle.com/paultimothymooney/exploring-political-propaganda-on-facebook)
\[9\] [Medium - How Russian trolls won American hearts and minds](https://medium.com/techforcampaigns/how-russian-trolls-won-american-hearts-and-minds-30037e1e13b7)
\[10\] [On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook](https://arxiv.org/pdf/1808.09218.pdf)
\[11\] [The IRA, Social Media and Political Polarization in the United States, 2012-2018](https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/IRA-Report.pdf)
\[12\] [Facebook ad targeting definition](https://www.facebook.com/business/ads/ad-targeting)
\[13\] [WordStream's Facebook advertising benchmarks](https://www.wordstream.com/blog/ws/2017/02/28/facebook-advertising-benchmarks)
\[14\] [Facebook for Business Impressions definition](https://www.facebook.com/business/help/675615482516035)
\[15\] [Topics of the 2016 Democratic debates and forums](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_debates_and_forums#Saturday_December_19,_2015_%E2%80%93_Goffstown,_New_Hampshire)
\[16\] [Why Do Nigerian Scammers Say They are From Nigeria?](https://www.microsoft.com/en-us/research/publication/why-do-nigerian-scammers-say-they-are-from-nigeria/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F167719%2Fwhyfromnigeria.pdf)

Code used to produce findings
-----------------------------

Note that the code below starts from the clean and labeled data. If you wish to understand how the data was extracted from the raw pdf ads and subsequently cleaned, start at the [pdf data extraction](pdf_data_extraction.ipynb) notebook. The last cell of each notebook contains the link to the next notebook. Below is a summary of each notebook's content, in order.

| Order | Notebook | Content description |
|-------|----------------------------|---------------------------------------------------------------------------|
| 1 | pdf_data_extraction.ipynb | Extracts text from pdfs and creates a raw csv file. |
| 2 | data_cleaning.ipynb | Removes unused/null columns and cleans each column used for the analysis. |
| 3 | demographic_labeling.ipynb | Adds the demographic column to the dataset. |
| 4 | analysis.ipynb | Analysis and figures' code. |

### Q1 - Were some targeted demographics more engaged with the IRA ads?

We first import pandas and numpy, and read the labeled clean dataset.
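Before diving in, a brief aside on the demographic labels used throughout. The labeling itself happens in [demographic_labeling.ipynb](demographic_labeling.ipynb); the snippet below is only a hedged, minimal sketch of the iterative seed-term expansion described in the Q1 method above. The `interests` column name, the seed terms, and the fully automatic term harvesting are all simplifying assumptions — in the real workflow each new term was reviewed by hand:

```python
import re
import pandas as pd

def grow_group(ads: pd.DataFrame, seed_terms: set, max_iter: int = 10) -> pd.Series:
    """Iteratively expand the set of ads matching one demographic's interest terms."""
    terms = set(seed_terms)
    matched = pd.Series(False, index=ads.index)
    for _ in range(max_iter):
        pattern = "|".join(re.escape(t) for t in sorted(terms))
        new_matched = ads["interests"].str.contains(pattern, case=False, na=False)
        if new_matched.sum() == matched.sum():  # the group stopped growing
            break
        matched = new_matched
        # Harvest interest terms that co-occur in the matched ads; in the real
        # workflow these candidate terms were vetted manually before being added.
        co_terms = ads.loc[matched, "interests"].str.split(",").explode().str.strip()
        terms |= set(co_terms.dropna())
    return matched

# Hypothetical usage (column and group names are illustrative only):
# ads.loc[grow_group(ads, {"La Raza", "Chicano"}), "demographic"] = "Mexican-American"
```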
###Code
import pandas as pd
import numpy as np
ads_df = pd.read_csv('../clean_data/labeled_clean_data.csv', parse_dates=['ad_creation_date', 'ad_end_date'])
###Output
_____no_output_____
###Markdown
First we add a column for ad_count. We then group the data by demographic and aggregate by summing the values in the ad_spend, ad_impressions, ad_clicks and ad_count columns. We calculate the click through rate by dividing the total clicks by the number of ad impressions.
###Code
# Add ad_count column
ads_df['ad_count'] = 1
# Aggregate the data by demographic
summary_by_demographic = ads_df.groupby('demographic').agg({'ad_spend':'sum','ad_impressions':'sum', 'ad_clicks': 'sum', 'ad_count': 'sum'})
# Calculate the click through rate
summary_by_demographic['Click through rate'] = summary_by_demographic.apply(lambda x: x.ad_clicks / x.ad_impressions, axis=1)
###Output
_____no_output_____
###Markdown
Before displaying the data, we sort it in descending order of click through rate. We display the click through rates as a ratio with the average Facebook ad click through rate of 0.9%.
###Code
sorted_demographics = summary_by_demographic['Click through rate'].sort_values(ascending=False)
sorted_demographics_multiples = (sorted_demographics / 0.009)
###Output
_____no_output_____
###Markdown
We use the seaborn library to make our graphic more visually appealing (color and background styling options). We also change font sizes for readability.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=5)
fig, ax = plt.subplots(figsize=(32,18))
sns.axes_style("darkgrid")
sns.set_palette('husl', 10)
ax = sns.barplot(sorted_demographics_multiples.values, sorted_demographics_multiples.index.values, orient='h')
ax.tick_params(labelsize=40)
ax.set_xticks(np.arange(0,15))
ax.set_xlabel('Click through rate (nb times the average facebook ad)')
plt.title('How many times more likely to click an IRA ad than the average Facebook ad')
plt.show()
###Output
_____no_output_____
###Markdown
The code below saves the colors associated with the different demographics so they can be reused for the rest of the analysis.
###Code
# Save the color palette used for the graph above
palette = sns.color_palette('husl',10)
# Create a dictionary associating the colors to the demographics
palette_dict = {}
index = 0
for demographic in sorted_demographics_multiples.index.values:
palette_dict[demographic] = palette[index]
index +=1
# Write a utility function used to get the colors of demographics in subsequent graphs
def get_cmap(demographics):
return [palette_dict[demographic] for demographic in demographics]
###Output
_____no_output_____
###Markdown
### Q2 - How did the amount of ads seen by targeted demographics change preceding political events?

The image below, taken from a [report](https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/IRA-Report.pdf), shows that the number of ads on Facebook spiked before and sometimes shortly after important political events of the 2016 election.

![Oxford events important political events timeline](../assets/pictures/important_events.png)

From this graph, and with a bit of help from Wikipedia, we extract the following important campaign dates:

12/18/2015 - [Third democratic debate](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_debates_and_forums#Schedule)
01/14/2016 - [Sixth republican debate](https://en.wikipedia.org/wiki/2016_Republican_Party_presidential_debates_and_forums)
02/01/2016 - [Iowa caucuses (start of primaries)](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_primaries)
06/14/2016 - [End of primaries](https://en.wikipedia.org/wiki/2016_Democratic_Party_presidential_primaries)
09/16/2016 - [First presidential debate between Hillary and Donald](https://en.wikipedia.org/wiki/2016_United_States_presidential_debates#First_presidential_debate)
10/04/2016 - [Second presidential debate between Hillary and Donald](https://en.wikipedia.org/wiki/2016_United_States_presidential_debates#Second_presidential_debate)
10/14/2016 - [Third presidential debate between Hillary and Donald](https://en.wikipedia.org/wiki/2016_United_States_presidential_debates#Third_presidential_debate)
11/08/2016 - [Election day](https://en.wikipedia.org/wiki/2016_United_States_presidential_election)
12/29/2016 - [Obama announces sanctions against Russia](https://en.wikipedia.org/wiki/Special_Counsel_investigation_(2017%E2%80%932019)#Links_between_Trump_associates_and_Russian_officials)

First we import a few libraries which will be useful:

* datetime for calculating time intervals
* matplotlib's pyplot for graphing
* pandas.plotting for easy conversion of dates from pandas to matplotlib
* DateFormatter to format date labels
###Code
from datetime import timedelta
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
from matplotlib.dates import DateFormatter
register_matplotlib_converters()
###Output
_____no_output_____
###Markdown
We then create a dictionary of important events and the dates before and after each important date we will be examining. For each important event, we add an event_name entry, which will be used when displaying the results, as well as a mask entry, which will enable us to filter ads to the selected period.
###Code
political_events = {
'12/18/2015': {'date_range': [], 'event_name': '3rd Dem. debate', 'mask': [] },
'01/14/2016': {'date_range': [], 'event_name': '6th Rep. debate', 'mask': [] },
'02/01/2016': {'date_range': [], 'event_name': 'Iowa caucuses', 'mask': [] },
'06/14/2016': {'date_range': [], 'event_name': 'End of primary', 'mask': [] },
'09/16/2016': {'date_range': [], 'event_name': '1st Pres. debate', 'mask': [] },
'10/04/2016': {'date_range': [], 'event_name': '2nd Pres. debate', 'mask': [] },
'10/14/2016': {'date_range': [], 'event_name': '3rd Pres. debate', 'mask': [] },
'11/08/2016': {'date_range': [], 'event_name': 'Election', 'mask': [] },
'12/29/2016': {'date_range': [], 'event_name': 'Sanctions on Russia', 'mask': [] }
}
###Output
_____no_output_____
###Markdown
Ads have a start date and sometimes an end date. According to the "CITATION HERE" study, ads that only have a creation date most likely ran for that day only, whereas ads that have both a start and an end date ran for multiple days. To filter the ads to a given date range around a political event, we first need to duplicate rows so that the number of ads shown by demographic by day can be counted correctly. Below we create a function which duplicates rows for ads that ran for multiple days.
###Code
# Given a dataframe with non-null start date string column and end date string column
# returns a new dataset with duplicate rows for all days between start and end date inclusive
def expand_dates(df, start, end, new_col_name):
all_rows = []
# For each row
for index, row in df.iterrows():
# Select start date
start_timestamp = pd.to_datetime(row[start])
# If end date is null put same as start
if (pd.isnull(row[end])):
end_timestamp = start_timestamp
else:
end_timestamp = pd.to_datetime(row[end])
# For dates until active is bigger than end
active_timestamp = start_timestamp
while active_timestamp <= end_timestamp:
new_row = row.append(pd.Series([active_timestamp], index=[new_col_name]))
all_rows.append(new_row)
active_timestamp += timedelta(days=1)
return pd.DataFrame(all_rows)
ads_dates_expanded = expand_dates(ads_df, 'ad_creation_date', 'ad_end_date', 'active_date')
###Output
_____no_output_____
###Markdown
The code below generates, for each event, date intervals and a mask for the data between the start and end dates. The interval on both sides of the targeted events was set to 7 days.
###Code
def get_start_end_date(middle_date_string, days):
middle_date_timestamp = pd.to_datetime(middle_date_string)
end_date = middle_date_timestamp + timedelta(days=days)
start_date = middle_date_timestamp - timedelta(days=days)
return start_date, end_date
# We will be graphing 7 days before and after the event's date
days_range = 7
for date_string, _ in political_events.items():
start_date, end_date = get_start_end_date(date_string, days_range)
# For each political event create a mask to only get the ads in the date range
political_events[date_string]['mask'] = (ads_dates_expanded['active_date'] >= start_date) & (ads_dates_expanded['active_date'] <= end_date)
###Output
_____no_output_____
###Markdown
We create a few utility functions and one dataframe for each of the political events to simplify the charting.

1. get_date_range lets us iterate through a period of `days` days around a date given as a string.
2. get_plot_data, for each date in an interval, makes sure an entry is present for each demographic. (When no entry is present, a 0 is added to the dataset.) This is done purely for matplotlib graphing. The ordered labels of the demographics for the data are also returned.
###Code
def get_date_range(middle_date_string, days):
middle_date_timestamp = pd.to_datetime(middle_date_string)
end_date = middle_date_timestamp + timedelta(days=days)
start_date = middle_date_timestamp - timedelta(days=days)
# Starting at start date yield dates one by one
current_date = start_date
while current_date <= end_date:
yield current_date
current_date = current_date + timedelta(days=1)
def get_plot_data(rows, event_date_string):
plot_data = []
labels = []
for demographic in rows['demographic'].unique():
labels.append(demographic)
demographic_ads_rows = rows[rows['demographic'] == demographic]
ads_for_demographic_by_date = demographic_ads_rows.groupby('active_date').agg({'ad_impressions': 'sum'}).ad_impressions  # sum impressions, to match the graph titles below
values = []
date_indexes = []
# For each date in the interval
for date in get_date_range(event_date_string, days_range):
# If no ads were sent for the date add a 0 count entry
if date not in ads_for_demographic_by_date:
date_indexes.append(date)
values.append(0)
if len(values) > 0:
ads_for_demographic_by_date = ads_for_demographic_by_date.append(pd.Series(values, index=date_indexes))
plot_data.append(ads_for_demographic_by_date.sort_index())
return plot_data, labels
###Output
_____no_output_____
###Markdown
Using the utility functions, we obtain the plot data and labels for each event and add them to the political_events dictionary for simple graphing.
###Code
for key, dictionary in political_events.items():
rows_for_dates = ads_dates_expanded[dictionary['mask']]
plot_data, labels = get_plot_data(rows_for_dates, key)
dictionary['plot_data'] = plot_data
dictionary['labels'] = labels
###Output
_____no_output_____
###Markdown
For each event we generate graphs of the ad_impressions by demographic over the period of interest.
###Code
# Contrast, Font scale and linewidth
sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 1.5})
# Create a date formatter for the appropriate date format
date_form = DateFormatter("%Y-%m-%d")
event_count = len(political_events)
event_date_strings = list(political_events.keys())
# Create event_count subplots with width 14 and a total height of 8*event_count
fig, axs = plt.subplots(event_count, figsize=(14, event_count*8))
# Leave a 0.4 inch space between each graph
plt.subplots_adjust(hspace=0.4)
for i in range(event_count):
# Get the date and date string of the event we are graphing
event_date_string = event_date_strings[i]
event_date = pd.to_datetime(event_date_string)
# Get the data and labels for the plot
plot_data = political_events[event_date_string]['plot_data']
labels = political_events[event_date_string]['labels']
dates_to_plot = [x for x in plot_data[0].index]
# Get the axis object from matplotlib
ax = axs[i]
# Set font size
ax.tick_params(labelsize=15)
# Add a line for the event date
ax.axvline(event_date, linestyle='--', linewidth=1, c='black')
# Each graph has its own x axis
ax.set_xticks(dates_to_plot)
# X label dates with yyyy-mm-dd format
ax.xaxis.set_major_formatter(date_form)
# Plot values for dates ascending; the color map ensures the same demographic is
# always drawn using the same color
for i in range(len(labels)):
ax.plot(dates_to_plot, np.array(plot_data[i])/1000, label=None, color=get_cmap([labels[i]])[0])
ax.fill_between(dates_to_plot, np.array(plot_data[i])/1000, label=labels[i], color=get_cmap([labels[i]])[0], alpha=0.5)
ax.legend(loc='right', ncol=1, bbox_to_anchor=(1.2, .5), shadow=True)
label_index = 1
ax.set_title('Ad impressions (1000x) by demographic 7 days before and after the ' + political_events[event_date_string]['event_name'])
for label in ax.get_xticklabels():
label.set_ha('right')
label.set_rotation(15)
label.set_visible(label_index%3 == 0)
label_index += 1
###Output
_____no_output_____
###Markdown
Ryder: Global Expansion Research
================================

This Jupyter notebook contains the summaries and visualizations that should accompany the report on Ryder's company expansion into other countries, based on key indicators derived from a country's level of crime and stability, basic measures of health and wealth, and ratios of public education spending.

***

Table of Contents
-----------------

1. [Title](#top)
2. [Introduction](#introduction)
3. [Dependencies](#dependencies)
4. [Prepare for Analysis](#prepare)
   a. [Import Countries](#import_countries)
   b. [Selecting Countries](#select_countries)
   c. [Import Data](#import_data)
      i. [Import MFI](#import_data_mfi)
      ii. [Import GTD](#import_data_gtd)
      iii. [Import PED](#import_data_ped)
5. [Queries](#query)
   a. [Query MFI](#query_mfi)
   b. [Query GTD](#query_gtd)
   c. [Query PED](#query_ped)

***

Introduction
------------

This notebook contains queries made against cleansed versions of the following Kaggle datasets:

- [Global Terrorism Database](https://www.kaggle.com/START-UMD/gtd) (2018)
- [Infant Mortality, Fertility, Income per Capita](https://www.kaggle.com/burhanykiyakoglu/infant-mortality-fertility-income) (2018)
- [Public Education Expenditure as share of GDP](https://www.kaggle.com/ibrahimmukherjee/gdp-world-bank-datapublic-education-expenditure-as-share-of-gdp.csv) (2018)

Datasets were prepared using a combination of different command-line tools, including `bash`, `awk`, `sed`, and `python`. The original wide datasets were broken up to better facilitate querying. Details on that process are included in the accompanying documentation.

***

Dependencies
------------

This notebook requires the following packages:

- numpy
- pandas
- matplotlib

Before we can plot information and fiddle with the data, we need to import these modules.
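The exact command-line preparation steps live in the accompanying documentation; purely as an illustration of the wide-to-long reshaping described above, here is a minimal pandas sketch (the column names and values are hypothetical, not the real dataset's):

```python
import pandas as pd

# Hypothetical wide table: one row per country, one column per year.
wide = pd.DataFrame({
    "Country": ["Japan", "Sweden"],
    "2016": [2.0, 2.5],
    "2017": [1.9, 2.4],
})

# Melt to a long/tidy table: one row per (country, year) observation.
long = wide.melt(id_vars="Country", var_name="Year", value_name="Mortality Rate")
print(long)
```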
###Code
import numpy as np # numpy library for vectorized computations.
import pandas as pd # pandas library for manipulating dataframes.
###Output
_____no_output_____
###Markdown
pandas & numpy Pandas allows us to manipulate dataframes.We can create these dataframes by reading in our data. `*.tsv` files can be imported using the `read_csv()` function. Note the custom `\t` separator used with tab-separated-value files.```python Outputs a dataframe from a parsed data file.dataframe = pd.read_csv(".tsv", sep="\t")``` matplotlib We can also import matplotlib and the pylot modules for configuration and later use.
###Code
import matplotlib as mpl # matplotlib library for global settings.
import matplotlib.pyplot as plt # Our plotting functions from matplotlib.
display(plt.style.available) # Display what styles are available.
# %matplotlib inline
%config InlineBackend.figure_format = 'png'
# Above lines makes plots appear as inline svgs.
mpl.rcParams['figure.dpi'] = 100 # Apply DPI to matplotlib inline plots.
plt.style.use(['fivethirtyeight', 'seaborn-dark', 'ggplot']) # Apply particular styles.
plt.plot(np.sin(np.linspace(0, 2 * np.pi)), 'r-o') # Make the plot.
plt.show() # Show the plot.
###Output
_____no_output_____
###Markdown
***

Preparing for Analysis
----------------------

The following packages contain support functions unique to this particular report. The `analysis.analyser` package contains the `analyser` and `country` modules that do the bulk of the computational work. The `analysis.utils` package contains utility modules used to do repetitive tasks across the entire project.
###Code
from analysis.analyser import analyser
from analysis.analyser.country import Country
from analysis.utils import parser
from analysis.utils import validate
###Output
_____no_output_____
###Markdown
Selecting Candidate Countries
-----------------------------

We need to select countries from our dataset's available countries. This means finding the three (3) candidate countries and ensuring they have entries across all three datasets.
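`analyser.find_intersection` is a helper specific to this project; assuming it behaves like a plain set intersection over the code lists (an assumption, not the project's actual code), a minimal sketch would be:

```python
def find_intersection(*code_lists):
    """Return the country codes present in every input list."""
    sets = [set(codes) for codes in code_lists]
    return set.intersection(*sets)

# Toy usage with made-up lists:
print(find_intersection(["USA", "JPN", "SWE"], ["USA", "JPN", "GBR"], ["JPN", "USA"]))
# -> {'USA', 'JPN'}
```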
###Code
# Outputs a dataframe from a parsed data file.
countries_df = parser.read_tsv("../data/country_codes.tsv")
display(countries_df)
# Get the np.array of unique countries that appear in the public education expenditure dataset.
ped_countries_df = parser.read_tsv("../data/ped/ped_countries.tsv")
keys_ped = Country.from_frame(countries_df, search=list(ped_countries_df.iloc[:,0].unique()))
print(Country.format(keys_ped, sep="\n"))
# Get np.array of unique countries that appear in the infant mortality, fertility, income per capita dataset.
mfi_countries_df = parser.read_tsv("../data/mfi/mfi_countries.tsv")
keys_mfi = Country.from_frame(countries_df, search=mfi_countries_df.iloc[:,0].unique())
print(Country.format(keys_mfi[:5], sep="\n"), "\n...\n", Country.format(keys_mfi[-5:], sep="\n"))
# Get np.array of unique countries that appear in the global terrorism database.
gtd_countries_df = parser.read_tsv("../data/gtd/gtd_countries.tsv")
keys_gtd = Country.from_frame(countries_df, search=gtd_countries_df.iloc[:,0].unique())
print(Country.format(keys_gtd[:5], sep="\n"), "\n...\n", Country.format(keys_gtd[-5:], sep="\n"))
# Compute the unique available countries among the datasets.
country_codes = {
"ped": list(map(lambda country: country.code, keys_ped)),
"mfi": list(map(lambda country: country.code, keys_mfi)),
"gtd": list(map(lambda country: country.code, keys_gtd))
}
# Find the available countries.
unique_codes = analyser.find_intersection(*country_codes.values())
print(f'Unique Codes: {unique_codes}')
available_countries = countries_df[countries_df["Code"].isin(unique_codes)]
display(available_countries)
# Clear unused variables in IPython.
%reset_selective -f "^id$"
%reset_selective -f "^code$"
%reset_selective -f "^name$"
# Equivalent: del keys_ped
# Equivalent: del keys_mfi
# Equivalent: del keys_gtd
%reset_selective -f "_ped$"
%reset_selective -f "_mfi$"
%reset_selective -f "_gtd$"
# Equivalent: del ped_countries_df
# Equivalent: del mfi_countries_df
# Equivalent: del gtd_countries_df
%reset_selective -f "_countries_df$"
# Equivalent: del country_codes
# Equivalent: del unique_codes
%reset_selective -f "_codes$"
###Output
_____no_output_____
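`analyser.find_intersection` is one of the project helpers used above; a plausible minimal implementation (an assumption, shown only to make the cell self-explanatory) intersects an arbitrary number of code lists:
```python
from functools import reduce

def find_intersection(*code_lists):
    # Hypothetical stand-in for analysis.analyser.find_intersection:
    # return the codes present in every input list.
    return sorted(reduce(lambda a, b: a & b, (set(codes) for codes in code_lists)))

find_intersection(["USA", "JPN"], ["JPN", "SWE", "USA"], ["USA", "JPN", "GBR"])
# -> ['JPN', 'USA']
```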
###Markdown
Selecting Countries Now, with a sense of what countries are available, we can select 3 countries (besides the current one, USA) for comparison.
###Code
# The countries we want to evaluate.
codes = [
"GBR", # United Kingdom,
"JPN", # Japan,
"SWE", # Sweden,
"USA" # United States
]
display(codes)
# Select these countries from the set of available countries.
selected_df = available_countries[available_countries["Code"].isin(codes)]
selected_df = selected_df.set_index("Code", drop=False)
display(selected_df)
# Convert selected countries into Country representations for use across all datasets.
selected_countries = Country.get_countries(selected_df)
print(Country.format(selected_countries, sep="\n"))
###Output
<[JPN 101]: "Japan">
<[SWE 198]: "Sweden">
<[USA 217]: "United States">
<[GBR 603]: "United Kingdom">
###Markdown
Importing Data In order to query the data, we need to form our `pandas.DataFrame` representations. Mortality The mortality, fertility, and income datasets were made tidy, so that importing can be done in a relatively consistent manner. The important feature for the infant mortality dataset is the 'Mortality Rate'.
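`parser.read_mfi` is another project helper. Judging from how it is called below, and from the `original_index` column dropped later, a minimal sketch could look as follows; the body is assumed, not the actual implementation.
```python
import pandas as pd

def read_mfi(path, title, countries):
    # Hypothetical stand-in for analysis.utils.parser.read_mfi:
    # read a long-format table, label its value column, and keep
    # only the rows for the selected country codes.
    df = pd.read_csv(path, sep="\t")
    df = df.rename(columns={df.columns[-1]: title})  # e.g. "Mortality Rate"
    df = df[df["Code"].isin(countries)]              # filter to selected codes
    return df.reset_index().rename(columns={"index": "original_index"})
```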
###Code
# Get the mortality table using the parser.read_mfi helper function.
mortality_df = parser.read_mfi(
'../data/mfi/mortality/mortality_long.tsv',
title="Mortality Rate",
countries=selected_df.index)
display(mortality_df)
# Get the fertility table using the parser.read_mfi helper function.
fertility_df = parser.read_mfi(
'../data/mfi/fertility/fertility_long.tsv',
title="Fertility Rate",
countries=selected_df.index)
display(fertility_df)
# Get the income table using the parser.read_mfi helper function.
income_df = parser.read_mfi(
'../data/mfi/income/income_long.tsv',
title="Income",
countries=selected_df.index)
display(income_df)
###Output
_____no_output_____
###Markdown
Import Terrorism Database
###Code
# Import the terrorism database.
crime_df = parser.read_tsv('../data/gtd/gtd.tsv')
# Rename columns.
crime_df.columns = ['Event ID', 'Country ID', 'Country', 'Year', 'Success', 'Attack Type ID', 'Attack Type', 'Killed', 'Wounded']
# Descriptive statistics regarding the entire dataset.
print("Results for the overall dataset:")
display(crime_df.describe(include=object))
display(crime_df.drop(labels=['Event ID', 'Country ID'], axis=1).describe())
# Preparing for queries.
print("Selecting for candidate countries...")
crime_df = crime_df[crime_df['Country ID'].isin(selected_df['ID'])]
print("Mapping country ID to country code...")
id_map = dict(selected_df[['ID', 'Code']].values)
crime_df['Code'] = crime_df['Country ID'].map(id_map)
print("Summing to get casualty total...")
crime_df['Casualties'] = crime_df['Killed'] + crime_df['Wounded']
print("Reorganize columns...")
crime_df = crime_df[['Event ID', 'Code', 'Country', 'Year', 'Attack Type', 'Killed', 'Wounded', 'Casualties', 'Success']]
display(crime_df)
###Output
_____no_output_____
###Markdown
Import Public Expenditure Data
###Code
# Import the public expenditure database.
ped_df = parser.read_tsv('../data/ped/ped.tsv')
display(ped_df)
print("Rename columns...")
ped_df = ped_df.rename(columns={ 'Entity': 'Country', 'Public Expenditure on Education (percent of GDP)': "%GDP" })
print("Reorder columns...")
ped_df = ped_df[['Code', 'Country', 'Year', '%GDP']]
print("Selecting countries...")
ped_df = ped_df[ped_df['Code'].isin(selected_df['Code'])]
print("Sorting by year...")
ped_df = ped_df.sort_values(by=["Code", "Year"], axis=0)
display(ped_df)
# Descriptions for selected countries.
display(ped_df.describe(include=object))
display(ped_df.describe())
display(ped_df.groupby(['Code']).describe().drop(labels="Year", axis=1))
###Output
_____no_output_____
###Markdown
*** Queries The objective of the research project is to evaluate key indicators of stability and growth across several countries in order to estimate Ryder's potential for success in these new markets.This sample report evaluates the trends in three (3) separate countries as a sampling of what the broader international research has to offer: **The United Kingdom** (GBR), **Japan** (JPN), and **Sweden** (SWE).The **United States of America** (USA) is included in order to use our current operation region for context. For each of these countries, we want to answer the following questions: 1. Is infant mortality improving, stable, or getting worse? 2. Is income rising, stagnant, or falling? 3. Does one country or another seem more or less stable than the others, and why do you say this? 4. What changes do you predict for these countries, and why? Infant Mortality For **each country** in `selected_countries`, observations regarding infant mortality per 1,000 live births can be determined with the following steps:- Summarize rates of infant mortality per 1,000 live births per year, for each country.- Plot a time-series showing infant mortality trends in each country.
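The custom aggregators `analyser.percentile`, `analyser.IQR`, `analyser.spread`, and `analyser.mode` used in the following cells are factories that return named functions suitable for `DataFrame.agg`. A minimal sketch of how they could be written (an assumption, not the project's code):
```python
def percentile(q):
    # Named aggregator computing the q-th quantile of a Series.
    def percentile_(series):
        return series.quantile(q)
    percentile_.__name__ = f"percentile_{int(q * 100)}"
    return percentile_

def IQR():
    # Interquartile range: Q3 - Q1.
    def iqr_(series):
        return series.quantile(0.75) - series.quantile(0.25)
    iqr_.__name__ = "IQR"
    return iqr_

def spread():
    # Full range of the values: max - min.
    def spread_(series):
        return series.max() - series.min()
    spread_.__name__ = "spread"
    return spread_

def mode():
    # Most frequent value (the first one, if several are tied).
    def mode_(series):
        return series.mode().iloc[0]
    mode_.__name__ = "mode"
    return mode_
```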
###Code
# Query 1a. Average infant mortality per 1,000 live births per year, by country.
# print(describe_numeric(mortality_df.groupby(['Code']), 'Mortality Rate'))
mortality_stats_year = mortality_df.groupby(['Code']).agg({
'Year': [
'count',
'min',
'max',
analyser.spread(),
analyser.percentile(0),
analyser.percentile(0.25),
analyser.percentile(0.5),
analyser.percentile(0.75),
analyser.percentile(1),
analyser.IQR()
]
}).dropna(axis=1,how='all')
print(mortality_stats_year)
mortality_stats_rate = mortality_df.groupby(['Code']).agg({
'Mortality Rate': [
'min',
'idxmin',
'max',
'idxmax'
]
}).dropna(axis=1,how='all')
print(mortality_stats_rate)
mortality_quartiles = mortality_df.groupby(['Code']).agg({
'Mortality Rate': [
analyser.percentile(0),
analyser.percentile(0.25),
analyser.percentile(0.5),
analyser.percentile(0.75),
analyser.percentile(1),
analyser.IQR(),
analyser.spread()
]
})
print(mortality_quartiles)
# Query 1b. Time-series plot showing infant mortality trends, by country.
df = mortality_df.drop(labels=['original_index'], axis=1)
num_years = df['Year'].nunique()
num_countries = df['Code'].nunique()
year_min = df['Year'].min()
year_max = df['Year'].max()
options = {
'xlabel': 'year',
'ylabel': 'infant mortality (per 1,000 live births)',
'title': f'Infant mortality rate, by country ({year_min} to {year_max})',
'xlim': (year_min - 2, year_max + 2),
'ylim': (-2, df['Mortality Rate'].max() * 1.25),
}
# Plot the mortality information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
for key, grp in df.groupby(['Code']):
ax.plot('Year', 'Mortality Rate', 'o-', data=grp, label=df[df['Code'] == key]['Country'].unique()[0])
ax.set(**options)
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(15))
ax.legend(loc='upper right', frameon=False)
ax.grid(True)
###Output
Plotting Infant mortality rate, by country (1970 to 2016) for each 4 countr(y/ies) across 47 year(s)...
###Markdown
National Net Income per Capita For **each country** in `selected_countries`, observations regarding NNI per Capita (in 2018 $USD) can be determined with the following steps:- Summarize the NNI per Capita per year, for each country.- Plot a time-series showing NNI per Capita trends in each country.
###Code
# Query 2a. Summarize the NNI per Capita per year, for each country.
income_stats = income_df.groupby(['Code']).agg({
'Income': [
'min',
'idxmin',
'max',
'idxmax'
]
}).dropna(axis=1,how='all')
print(income_stats)
income_quartiles = income_df.groupby(['Code']).agg({
'Income': [
analyser.percentile(0),
analyser.percentile(0.25),
analyser.percentile(0.5),
analyser.percentile(0.75),
analyser.percentile(1),
analyser.IQR(),
analyser.spread()
]
})
print(income_quartiles)
# Query 2b. Plot a time-series showing NNI per Capita trends in each country.
df = income_df.drop(labels=['original_index'], axis=1)
num_years = df['Year'].nunique()
num_countries = df['Code'].nunique()
year_min = df['Year'].min()
year_max = df['Year'].max()
options = {
'xlabel': 'year',
'ylabel': 'national net income per capita',
'title': f'NNI per Capita, by country ({year_min} to {year_max}) in 2018 $USD',
'xlim': (year_min - 2, year_max + 2),
'ylim': (-2, df['Income'].max() * 1.25),
}
# Plot the mortality information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
for key, grp in df.groupby(['Code']):
ax.plot('Year', 'Income', 'o-', data=grp, label=df[df['Code'] == key]['Country'].unique()[0])
ax.set(**options)
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(15))
ax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('${x:,.0f}'))
ax.legend(loc='upper left', frameon=True)
ax.grid(True)
plt.show()
###Output
Plotting NNI per Capita, by country (1970 to 2016) in 2018 $USD for each 4 countr(y/ies) across 47 year(s)...
###Markdown
Crime & Stability For **each country** in `selected_countries`, observations regarding terrorist incidents can be determined with the following steps:- Summarize the terrorism incidents per year, for each country.- Plot a time-series showing crime and stability in each country.
###Code
# Query 3a. Summarize the terrorism incidents per year, for each country.
display(crime_df.groupby(['Code']).nunique()[['Event ID', 'Year', 'Attack Type']])
crime_stats = crime_df.groupby(['Code']).agg({
'Killed': [
'min',
'idxmin',
'max',
'idxmax',
'count',
'sum',
],
'Wounded': [
'min',
'idxmin',
'max',
'idxmax',
'sum',
],
'Casualties': [
'min',
'idxmin',
'max',
'idxmax',
'sum',
],
'Attack Type': [
'count',
'max',
'nunique',
analyser.mode(),
],
}).dropna(axis=1,how='all')
display(crime_stats)
crime_quartiles = crime_df.groupby(['Code']).agg({
'Killed': [
analyser.mode(),
analyser.percentile(0),
analyser.percentile(0.25),
analyser.percentile(0.5),
analyser.percentile(0.75),
analyser.percentile(1),
analyser.IQR(),
analyser.spread()
],
'Wounded': [
analyser.mode(),
analyser.percentile(0),
analyser.percentile(0.25),
analyser.percentile(0.5),
analyser.percentile(0.75),
analyser.percentile(1),
analyser.IQR(),
analyser.spread()
],
'Casualties': [
analyser.mode(),
analyser.percentile(0),
analyser.percentile(0.25),
analyser.percentile(0.5),
analyser.percentile(0.75),
analyser.percentile(1),
analyser.IQR(),
analyser.spread()
],
})
display(crime_quartiles)
crime_desc = crime_df.drop(labels=['Event ID'], axis=1).groupby(['Code', 'Attack Type']).describe()
print('Country Terrorist Attacks by Year')
display(crime_desc['Year'][['count', 'min', '50%', 'max']])
print('Country Terrorist Attack Casualties')
display(crime_desc[['Killed', 'Wounded']].drop(labels=['25%', '75%', 'mean', 'std'], axis=1, level=1))
print('Country Terrorist Attack Casualties')
display(crime_desc['Casualties'])
# Query 3b. Plot a time-series showing crime and stability in each country.
df = crime_df.sort_values(by=['Code','Event ID'])
df['Incidents'] = df.groupby(['Code']).cumcount()
num_attack_max = df['Incidents'].max()
num_years = df['Year'].nunique()
num_countries = df['Code'].nunique()
year_min = df['Year'].min()
year_max = df['Year'].max()
# Fill any gaps in the cumulative incident counts by linear interpolation.
df['Incidents'] = df['Incidents'].interpolate()
options = {
'xlabel': 'year',
'ylabel': 'number of incidents',
'title': f'Terror Incidents, by country ({year_min} to {year_max})',
'xlim': (year_min - 1, year_max + 1),
'ylim': (-2, num_attack_max * 1.25),
}
# Plot the crime information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
for key, grp in df.groupby(['Code']):
ax.plot(grp['Year'], grp['Incidents'], '.-', label=df[df['Code'] == key]['Country'].unique()[0])
ax.set(**options)
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(15))
# ax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('${x:,.0f}'))
ax.legend(loc='upper left', frameon=True)
ax.grid(True)
plt.show()
options = {
'xlabel': 'year',
'ylabel': 'number of casualties',
'title': f'Casualties due to terrorism, by country ({year_min} to {year_max})',
'xlim': (year_min - 1, year_max + 1),
'ylim': (0, df['Casualties'].max() * 1.25),
}
# Plot the crime information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
for key, grp in df.groupby(['Code']):
ax.plot(grp['Year'], grp['Casualties'], 'o-', label=df[df['Code'] == key]['Country'].unique()[0])
ax.set(**options)
ax.yaxis.set_major_locator(mpl.ticker.MaxNLocator(10))
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(15))
# ax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('${x:,.0f}'))
ax.legend(loc='upper left', frameon=True)
ax.grid(True)
plt.show()
casualties_df = df.groupby(['Code'])['Casualties'].sum()  # total casualties per country
incidents_df = df.groupby(['Code']).max()['Incidents']
options = {
'xlabel': 'number of casualties',
'ylabel': 'number of incidents',
'title': f'Terror incidents vs. casualties, by country ({year_min} to {year_max})',
'xlim': (-500, casualties_df.max() * 1.25),
'ylim': (incidents_df.min() - 500, incidents_df.max() * 1.25),
}
# Plot the crime information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
df = pd.concat([casualties_df, incidents_df], axis=1)
df = df.T
for code in df.columns:
ax.plot(df[code]['Casualties'], df[code]['Incidents'], 'o', label=code)
ax.set(**options)
ax.yaxis.set_major_locator(mpl.ticker.MaxNLocator(15))
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(5))
# ax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('${x:,.0f}'))
ax.legend(loc='upper left', frameon=True)
ax.grid(True)
plt.show()
###Output
Plotting Terror incidents vs. casualties, by country (1970 to 2017) for each 4 countr(y/ies) across 47 year(s)...
###Markdown
Public Expenditure
###Code
mux = pd.MultiIndex.from_product([
ped_df['Code'],
range(ped_df['Year'].min(), 2018)
])
# Get missing years.
ped_df_intp = ped_df.set_index(['Code', 'Year'])
ped_df_intp = ped_df_intp.sort_index().reindex(mux).reset_index()
ped_df_intp.columns = ['Code', 'Year', 'Country', '%GDP']
# Drop the redundant Country column and fill missing %GDP values by interpolation.
ped_df_intp = ped_df_intp.drop(labels=['Country'], axis=1)
ped_df_intp['%GDP'] = ped_df_intp['%GDP'].interpolate()
display(ped_df_intp)
print('Actual statistics')
display(ped_df.groupby(['Code']).describe(include=object))
display(ped_df.groupby(['Code']).describe()['%GDP'])
display("***")
print('Interpolated statistics')
display(ped_df_intp.describe()['%GDP'])
display(ped_df_intp.groupby(['Code']).describe()['%GDP'])
df = ped_df
year_min = df['Year'].min()
year_max = df['Year'].max()
options = {
'xlabel': 'year',
'ylabel': 'public expenditure on education (as share of GDP)',
'title': f'Public expenditure on education, by country ({year_min} to {year_max})',
'xlim': (df['Year'].quantile(0.25) * 0.95, df['Year'].quantile(0.75) * 1.05),
'ylim': (0 - df['%GDP'].quantile(0.15), df['%GDP'].max() * 1.25),
}
# Plot the crime information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
for key, grp in df.groupby(['Code']):
ax.plot(grp['Year'], grp['%GDP'], 'o', label=key)
ax.set(**options)
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(df['Year'].nunique()))
ax.yaxis.set_major_locator(mpl.ticker.MaxNLocator(10))
ax.yaxis.set_major_formatter(mpl.ticker.PercentFormatter())
ax.legend(loc='upper right', frameon=True)
ax.grid(True)
plt.show()
df = ped_df_intp
year_min = df['Year'].min()
year_max = df['Year'].max()
options = {
'xlabel': 'year',
'ylabel': 'public expenditure on education (as share of GDP)',
'title': f'Public expenditure on education, by country ({year_min} to {year_max})',
'xlim': (df['Year'].quantile(0.25) * 0.95, df['Year'].quantile(0.75) * 1.05),
'ylim': (0 - df['%GDP'].quantile(0.15), df['%GDP'].max() * 1.25),
}
display(df.groupby(['Code']).describe())
# Plot the crime information.
print(f'Plotting {options["title"]} for each {num_countries} countr(y/ies) across {num_years} year(s)...')
fig, ax = plt.subplots()
for key, grp in df.groupby(['Code']):
ax.plot(grp['Year'], grp['%GDP'], '.', label=key)
ax.set(**options)
ax.xaxis.set_major_locator(mpl.ticker.MaxNLocator(15))
ax.yaxis.set_major_locator(mpl.ticker.MaxNLocator(10))
ax.yaxis.set_major_formatter(mpl.ticker.PercentFormatter())
ax.legend(loc='upper right', frameon=True)
ax.grid(True)
plt.show()
###Output
_____no_output_____ |
session-2/python/TFIDF_NewsRecommender.ipynb | ###Markdown
TF-IDF based Recommender System A recommender system based on tf-idf as the vector representation of documents. TF-IDF Based Recommender1. Represent articles in terms of bag of words2. Represent user in terms of the words associated with read articles3. Generate TF-IDF matrix for user read articles and unread articles4. Calculate cosine similarity between user read articles and unread articles 5. Get the recommended articles **Describing parameters**:*1. PATH_NEWS_ARTICLES: specify the path where news_articles.csv is present* *2. ARTICLES_READ: List of Article_Ids read by the user* *3. NUM_RECOMMENDED_ARTICLES: Refers to the number of recommended articles returned as a result*
###Code
PATH_NEWS_ARTICLES="/home/phoenix/Documents/HandsOn/Final/news_articles.csv"
ARTICLES_READ=[2,7]
NUM_RECOMMENDED_ARTICLES=5
try:
import numpy
import pandas as pd
import pickle as pk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import re
from nltk.stem.snowball import SnowballStemmer
import nltk
stemmer = SnowballStemmer("english")
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
    !pip install numpy pandas scikit-learn nltk
import numpy
import pandas as pd
import pickle as pk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import re
from nltk.stem.snowball import SnowballStemmer
import nltk
stemmer = SnowballStemmer("english")
    print('Done!')

nltk.download('punkt')  # tokenizer models required by nltk.word_tokenize below
###Output
_____no_output_____
###Markdown
1. Represent articles in terms of bag of words1. Reading the csv file to get the Article id, Title and News Content2. Remove punctuation marks and other symbols from each article3. Tokenize each article4. Stem every token of each article
###Code
news_articles = pd.read_csv(PATH_NEWS_ARTICLES)
news_articles.head()
#Select relevant columns and remove rows with missing values
news_articles = news_articles[['Article_Id','Title','Content']].dropna()
#articles is a list of all articles
articles = news_articles['Content'].tolist()
articles[0] #an uncleaned article
def clean_tokenize(document):
document = re.sub('[^\w_\s-]', ' ',document) #remove punctuation marks and other symbols
tokens = nltk.word_tokenize(document) #Tokenize sentences
cleaned_article = ' '.join([stemmer.stem(item) for item in tokens]) #Stemming each token
return cleaned_article
cleaned_articles = list(map(clean_tokenize, articles))  # materialize so the list can be indexed and reused
cleaned_articles[0] #a cleaned, tokenized and stemmed article
###Output
_____no_output_____
###Markdown
2. Represent user in terms of the words associated with read articles
###Code
#Get user representation in terms of words associated with read articles
user_articles = ' '.join(cleaned_articles[i] for i in ARTICLES_READ)
user_articles
###Output
_____no_output_____
###Markdown
3. Generate TF-IDF matrix for user read articles and unread articles
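As a reminder of what the vectorizer computes with its defaults (`smooth_idf=True`, `norm='l2'`): the weight of term $t$ in document $d$, given $N$ documents of which $\text{df}(t)$ contain $t$, is $$\text{tf-idf}(t, d) = \text{tf}(t, d) \cdot \left( \ln\frac{1 + N}{1 + \text{df}(t)} + 1 \right)$$ and each document vector is subsequently $\ell_2$-normalized.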
###Code
#Generate tfidf matrix model for entire corpus
tfidf_matrix = TfidfVectorizer(stop_words='english', min_df=2)
article_tfidf_matrix = tfidf_matrix.fit_transform(cleaned_articles)
article_tfidf_matrix #tfidf vector of an article
#Generate tfidf matrix model for read articles
user_article_tfidf_vector = tfidf_matrix.transform([user_articles])
user_article_tfidf_vector
user_article_tfidf_vector.toarray()
###Output
_____no_output_____
###Markdown
4. Calculate cosine similarity between user read articles and unread articles
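Cosine similarity scores each article vector $v$ against the user profile vector $u$ by the angle between them: $$\cos(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$$ Since tf-idf weights are non-negative the score lies in $[0, 1]$, and because `TfidfVectorizer` $\ell_2$-normalizes its output by default, the expression reduces to a plain dot product.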
###Code
articles_similarity_score=cosine_similarity(article_tfidf_matrix, user_article_tfidf_vector)
recommended_articles_id = articles_similarity_score.flatten().argsort()[::-1]
recommended_articles_id
#Remove read articles from recommendations
final_recommended_articles_id = [article_id for article_id in recommended_articles_id
if article_id not in ARTICLES_READ ][:NUM_RECOMMENDED_ARTICLES]
###Output
_____no_output_____
###Markdown
5. Get the recommended articles
###Code
final_recommended_articles_id
#Recommended Articles and their title
print('Articles Read')
print(news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title'])
print('\n')
print('Recommender ')
print(news_articles.loc[news_articles['Article_Id'].isin(final_recommended_articles_id)]['Title'])
###Output
Articles Read
2 US South Korea begin joint military drill ami...
7 Dialogue crucial in finding permanent solution...
Name: Title, dtype: object
Recommender
2724 PM Modi says at all-party meeting that PoK is ...
2808 J K CM Mufti blames vested interests for Ka...
2862 J K PM Modi appeals for peace in Valley assu...
2950 Kashmir Death toll rises to 8 in protests ove...
3326 US China to fully implement sanctions again...
Name: Title, dtype: object
|
notebooks/eflint3-features/2_conjuctive_and_disjunctive_conditions.ipynb | ###Markdown
2. Conjunctive and disjunctive conditionsIn previous versions of eFLINT, `Conditioned by` clauses could only be associated with actions to determine whether these actions are `Enabled`, i.e. do *not* cause violations when they are triggered. Experiments showed the value of implicitly using logical conjunction to combine multiple conditions expressed using `Conditioned by`, whether as part of a single declaration or through the use of `Extend`. The combination of the *conjunctive* `Conditioned by` clauses and the *disjunctive* `Holds when` (and `Derived from`) clauses proved very expressive, in a way that is also useful for other kinds of types. In eFLINT-3.0, the role of `Conditioned by` clauses has changed. These clauses can now also be associated with the other kinds of types, i.e. fact-types, duty-types and event-types. Moreover, they are not only used to determine whether an instance of a type is `Enabled`, but act as a kind of filter on derivation rules. That is, an instance of a type is considered to be *derivable* if one or more of its derivation rules say it is **and** if all conditions of the type are satisfied. (As a consequence, the only difference between `Holds` and `Enabled` is on instances whose truth is postulated rather than derived and whose type has one or more conditions associated with it.)To demonstrate the expressiveness of conjunctive and disjunctive conditions, this notebook works out an example of an exception to an exception. Consider the following article about entry requirements at a school:> Article 1) An Applicant can be accepted into St. John's Academy only if they have completed a Primary School with a GPA score of at least 3 as demonstrated by a nationally recognised Diploma.This article lays out a condition (GPA >= 3) that applies to any potential student, irrespective of the source of their eligibility. Therefore the condition is to be expressed as a conjunctive clause.
###Code
Fact applicant
Fact primary-school Identified by StMary, StGeorge
Fact gpa Identified by 1..4
Fact diploma Identified by primary-school * applicant * gpa
Fact accepted Identified by applicant
Act accept-application Recipient applicant
Holds when applicant // contextual condition for the power
Conditioned by diploma() && gpa >= 3 // a valid diploma must exist for this applicant with gpa >= 3
Creates accepted()
.
+applicant(Alice).
+applicant(Bob).
+applicant(Chloe).
+diploma(StMary, Alice, 3).
+diploma(StGeorge, Bob, 3).
+diploma(StGeorge, Chloe, 2).
?Enabled(accept-application(StJohn, Alice)).
?Enabled(accept-application(StJohn, Bob)).
?!Enabled(accept-application(StJohn, Chloe)).
###Output
_____no_output_____
###Markdown
The code cell above shows the eligibility condition of article 1 as a `Conditioned by` on the act-type `accept-application`. Written as such, the condition cannot be 'weakened' without modifying the original code, i.e. it cannot be overruled merely by executing an additional declaration. A layer of indirection can be used to avoid this problem by adding a fact-type (`[Article 1]` below) that has its own conditions and derivation rules.
###Code
Act accept-application Recipient applicant
Holds when applicant // contextual condition for the power
Conditioned by [Article 1]()
Creates accepted()
Fact [Article 1]
Identified by applicant // identified by all the types bound on the 'call site'
Holds when diploma() && gpa >= 3
.
?Enabled(accept-application(StJohn, Alice)).
?Enabled(accept-application(StJohn, Bob)).
?!Enabled(accept-application(StJohn, Chloe)).
###Output
_____no_output_____
###Markdown
Because the condition expressed in article 1 is now written in a `Holds when` clause, it can be overruled by additional `Holds when` clauses added to the type `[Article 1]`. However, because the article 1 condition is still referred to from a `Conditioned by` clause in `accept-application`, it is still a condition that needs to hold true for all applications that are to be accepted. To see how this works, consider the following exception to article 1 and its formalisation.> Article 2) An exception to Article 1 can be made for all applicants with a Diploma from St. George's Primary School with a GPA of at least 2
###Code
Extend Fact [Article 1] // additional derivation rule: Article 1 also holds when Article 2 applies
Holds when [Article 2]()
Fact [Article 2] Identified by applicant
Holds when diploma() && primary-school == StGeorge && gpa >= 2
.
accept-application(StJohn, Alice).
accept-application(StJohn, Bob).
?Enabled(accept-application(StJohn, Chloe)).
###Output
_____no_output_____
###Markdown
Exceptions to Article 2 are possible because the Article 2 exception is itself applied through the indirection suggested for Article 1. For example,> Article 3) However, Article 2 is only applicable for the first applicant from St George's.
###Code
Extend Fact [Article 2]
Conditioned by [Article 3]()
Fact [Article 3] Identified by applicant
Holds when Not(Exists diploma : diploma.applicant != applicant
&& accepted(diploma.applicant)
&& diploma.primary-school == StGeorge)
.
?!Enabled(accept-application(StJohn, Chloe)).
###Output
_____no_output_____ |
COVID19_Allergy_ETL.ipynb | ###Markdown
ETL pipeline and implementation of a data warehouse with the COVID-19 and Allergy informationThis script extracts the information from the CSV files of **COVID-19** and **Allergy** from the data set [SyntheaTM Patient Generator](https://github.com/synthetichealth/synthea). From this data set, the files `patients.csv`, `observations.csv`, `procedures.csv` and `conditions.csv` of the two pathologies are used.Although these data are synthetic, the patient IDs are replaced by MD5 hash strings and the dates of birth and death are coarsened to years of birth and death in order to simulate pseudonymization. In addition, other columns that are not relevant (for our project) are not carried over.The transformed "raw" data are stored in the data warehouse. They are the basis of the dimension and fact tables. For the data warehousing, a star schema together with views is created to enable Online Analytical Processing. At the end, the tables holding the raw data are deleted. Reproducibility
Delete all variables
###Code
%reset -f
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import pandas as pd
import sys
from pandas.util import hash_pandas_object
from functools import reduce
import sqlite3 as sq
from sqlite3 import Error
import hashlib as hl
import csv
import numpy as np
from pandas_profiling import ProfileReport
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
###Output
Mounted at /content/drive
###Markdown
Check the versions of the environment and libraries
|Library|Version|
|-|-|
| csv |1.0|
| pandas |1.1.5|
| numpy |1.19.5|
| sqlite3 |2.6.0|
| hashlib |3.9|
| google|2.0.3|
###Code
#sys.version
# %pip freeze
###Output
_____no_output_____
###Markdown
Definition of the variables for the CSV files and databases
###Code
# Studies
patient_allergy = "allergy"
patient_covid19 = "covid19"
# csv files
material_path_covid19 = "/content/drive/MyDrive/csv_data/"+patient_covid19+"/"
material_path_allergy = "/content/drive/MyDrive/csv_data/"+patient_allergy+"/"
# Data Warehouse
db_file_path_cov_alle = "/content/drive/MyDrive/db_files/cov_alle.db"
!rm {db_file_path_cov_alle} # delete file if exists
###Output
_____no_output_____
###Markdown
Extraction and transformationDatasets used- `patients`- `observations`- `conditions`- `procedures`Method* Load the CSV files into data frames* Add a new column with the study* Concatenate the two data frames* Drop redundant and unneeded variables* Create pseudonyms (patients only)* Handle missing values Load the CSV files
###Code
# Patients
# covid-19
patient_cov = pd.read_csv(material_path_covid19 + "/patients.csv")
# allergy
patient_all = pd.read_csv(material_path_allergy + "/patients.csv")
# Observations
# covid-19
observation_cov = pd.read_csv(material_path_covid19 + "/observations.csv")
# allergy
observation_all = pd.read_csv(material_path_allergy + "/observations.csv")
# Conditions
# covid19
condition_cov = pd.read_csv(material_path_covid19 + "/conditions.csv")
# allergy
condition_all = pd.read_csv(material_path_allergy + "/conditions.csv")
# Procedures
# covid19
procedure_cov = pd.read_csv(material_path_covid19 + "/procedures.csv")
# allergy
procedure_all = pd.read_csv(material_path_allergy + "/procedures.csv")
###Output
_____no_output_____
###Markdown
Building the data set from the CSV files
###Code
# Patients
#covid
patient_cov["STUDY"] = 'COVID-19' # new column with study
#allergy
patient_all["STUDY"] = 'Allergy'
#union of both dataframes
patient = pd.concat([patient_all, patient_cov]).drop_duplicates()
# delete not important columns
patient = patient.drop(['SSN', 'PREFIX', 'ZIP', 'DRIVERS', 'PASSPORT', 'FIRST',
'LAST', 'BIRTHPLACE', 'ADDRESS', 'STATE', 'COUNTY', 'MAIDEN', 'SUFFIX', 'LAT', 'LON', 'HEALTHCARE_EXPENSES', 'HEALTHCARE_COVERAGE'], axis=1)
# new hash patient id
patient['PSPID'] = [hl.md5(val.encode('UTF-8')).hexdigest() for val in patient['Id']]
# further transformations to clean the information
patient['MARITAL'].fillna(patient['MARITAL'].mode()[0], inplace=True)
patient["DEATHDATE"] = patient.DEATHDATE.fillna(pd.to_datetime("today"))
# date time transformation and keep the year of birth and year of death
patient["DEATHDATE"] = pd.to_datetime(patient["DEATHDATE"])
patient["DEATHDATE"] = patient.DEATHDATE.dt.year
patient["BIRTHDATE"] = pd.to_datetime(patient["BIRTHDATE"])
patient['BIRTHDATE'] = patient.BIRTHDATE.dt.year
# calculate age
patient["AGE"] = patient.DEATHDATE - patient.BIRTHDATE
# show some patients
pd.concat([patient.head(3), patient.tail(3)])
# Observations
observation_cov = pd.read_csv(material_path_covid19 + "/observations.csv")
observation_cov["STUDY"] = 2
observation_all = pd.read_csv(material_path_allergy + "/observations.csv")
observation_all["STUDY"] = 1
observation = pd.concat([observation_all, observation_cov]).drop_duplicates()
observation = observation.drop(['ENCOUNTER', 'TYPE'], axis=1)
observation["DATE"] = pd.to_datetime(observation["DATE"])
pd.concat([observation.head(3), observation.tail(3)])
# Conditions
condition_cov = pd.read_csv(material_path_covid19 + "/conditions.csv")
condition_cov["STUDY"] = 2
condition_all = pd.read_csv(material_path_allergy + "/conditions.csv")
condition_all["STUDY"] = 1
condition = pd.concat([condition_all, condition_cov]).drop_duplicates()
condition = condition.drop(['ENCOUNTER'], axis=1)
condition["START"] = pd.to_datetime(condition["START"])
condition["STOP"] = condition.STOP.fillna(pd.to_datetime("today"))
condition["STOP"] = pd.to_datetime(condition["STOP"])
pd.concat([condition.head(3), condition.tail(3)])
# Procedures
procedure_cov = pd.read_csv(material_path_covid19 + "/procedures.csv")
procedure_cov["STUDY"] = 2
procedure_all = pd.read_csv(material_path_allergy + "/procedures.csv")
procedure_all["STUDY"] = 1
procedure = pd.concat([procedure_all, procedure_cov])
procedure = procedure.drop(['ENCOUNTER', 'REASONCODE', 'REASONDESCRIPTION', 'BASE_COST'], axis=1)
procedure["DATE"] = pd.to_datetime(procedure["DATE"])
pd.concat([procedure.head(3), procedure.tail(3)])
###Output
_____no_output_____
###Markdown
Data Warehouse Preparing the dimensions for the data warehouse from the `patients` table* Extract the columns of interest* Drop duplicate values* Create a new index* Add a new column with IDs
###Code
# gender
# select gender
gender = pd.DataFrame(patient['GENDER'].unique().tolist(), columns=['GENDER'])
# new column with IDs
gender["ID"] = gender.index + 1
gender
# race
race = pd.DataFrame(patient['RACE'].unique().tolist(), columns=['RACE'])
race['ID'] = race.index + 1
race
# marital
marital = pd.DataFrame(patient['MARITAL'].unique().tolist(), columns=['MARITAL'])
marital['ID'] = marital.index + 1
marital
# ethnicity
ethnicity = pd.DataFrame(patient['ETHNICITY'].unique().tolist(), columns=['ETHNICITY'])
ethnicity['ID'] = ethnicity.index + 1
ethnicity
# study
study = pd.DataFrame(patient['STUDY'].unique().tolist(), columns=['STUDY'])
study['ID'] = study.index + 1
study
# city
city = pd.DataFrame(patient['CITY'].unique().tolist(), columns=['CITY'])
city['ID'] = city.index + 1
city.head(3)
###Output
_____no_output_____
###Markdown
Tables in the data warehouse
###Code
# Data Warehouse
sql_table_dwh = {} # tables
sql_index_dwh = {} # indices
###Output
_____no_output_____
###Markdown
Tables for the "raw" data
###Code
# patient: Id BIRTHDATE DEATHDATE MARITAL RACE ETHNICITY GENDER CITY STUDY PSPID AGE
sql_table_dwh['patient'] = """
create table if not exists patient(
ID VARCHAR,
BIRTHDATE INTEGER,
DEATHDATE INTEGER,
MARITAL VARCHAR,
RACE VARCHAR,
ETHNICITY VARCHAR,
GENDER VARCHAR,
CITY VARCHAR,
STUDY VARCHAR,
PSPID VARCHAR,
AGE INTEGER
);
"""
# observations: DATE PATIENT CODE DESCRIPTION VALUE UNITS STUDY
sql_table_dwh['observation'] = """
create table if not exists observation(
DATE DATE,
PATIENT VARCHAR,
CODE VARCHAR,
DESCRIPTION VARCHAR,
VALUE VARCHAR,
UNITS VARCHAR,
STUDY VARCHAR
);
"""
# conditions: START STOP PATIENT CODE DESCRIPTION STUDY
sql_table_dwh['condition'] = """
create table if not exists condition(
START DATE,
STOP DATE,
PATIENT VARCHAR,
CODE VARCHAR,
DESCRIPTION VARCHAR,
STUDY VARCHAR
);
"""
# procedures: DATE PATIENT CODE DESCRIPTION BASE_COST STUDY
sql_table_dwh['procedures'] = """
create table if not exists procedure(
DATE DATE,
PATIENT VARCHAR,
CODE VARCHAR,
DESCRIPTION VARCHAR,
BASE_COST VARCHAR,
STUDY VARCHAR
);
"""
###Output
_____no_output_____
###Markdown
Dimension tables- `dimPatient`- `dimGender`- `dimStudy`- `dimCity`- `dimSnomed`- `dimLoinc`- `dimEthnicity`- `dimMarital`- `dimRace`
###Code
# patient
sql_table_dwh['dimPatient'] = """
create table if not exists dimPatient(
BIRTHDATE INTEGER,
DEATHDATE INTEGER,
PSPID VARCHAR PRIMARY KEY,
AGE INTEGER
);
"""
# Gender
sql_table_dwh['dimGender'] = """
create table if not exists dimGender(
ID INTEGER PRIMARY KEY,
GENDER VARCHAR UNIQUE NOT NULL
);
"""
# Study
sql_table_dwh['dimStudy'] = """
create table if not exists dimStudy(
ID INTEGER PRIMARY KEY,
STUDY VARCHAR UNIQUE NOT NULL
);
"""
# City
sql_table_dwh['dimCity'] = """
create table if not exists dimCity(
ID INTEGER PRIMARY KEY,
CITY VARCHAR UNIQUE NOT NULL
);
"""
# Ethnicity
sql_table_dwh['dimEthnicity'] = """
create table if not exists dimEthnicity(
ID INTEGER PRIMARY KEY,
ETHNICITY VARCHAR UNIQUE NOT NULL
);
"""
# Marital
sql_table_dwh['dimMarital'] = """
create table if not exists dimMarital(
ID INTEGER PRIMARY KEY,
MARITAL VARCHAR UNIQUE NOT NULL
);
"""
# Race
sql_table_dwh['dimRace'] = """
create table if not exists dimRace(
ID INTEGER PRIMARY KEY,
RACE VARCHAR UNIQUE NOT NULL
);
"""
# SNOMED
sql_table_dwh['dimSnomed'] = """
create table if not exists dimSnomed(
CODE VARCHAR PRIMARY KEY,
DESCRIPTION VARCHAR UNIQUE NOT NULL
);
"""
# LOINC
sql_table_dwh['dimLoinc'] = """
create table if not exists dimLoinc(
CODE VARCHAR PRIMARY KEY,
DESCRIPTION VARCHAR UNIQUE NOT NULL
);
"""
###Output
_____no_output_____
###Markdown
Fact tables
* `factObservation`
* `factProcedure`
* `factCondition`
Each table has an index on every ID column.
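Together with the dimension tables above, these fact tables form the star schema; schematically, for `factObservation` (the other two fact tables are analogous):
```
dimPatient   dimMarital   dimRace    dimEthnicity
        \         |           |          /
         +----- factObservation -------+
        /         |           |          \
 dimGender     dimCity     dimStudy    dimLoinc
```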
###Code
# factObservation
sql_table_dwh['factObservation'] = """
create table if not exists factObservation(
PATIENT_PSPID VARCHAR REFERENCES dimPatient(PSPID),
BIRTHYEAR INTEGER,
DEATHYEAR INTEGER,
MARITAL_ID VARCHAR REFERENCES dimMarital(ID),
RACE_ID VARCHAR REFERENCES dimRace(ID),
ETHNICITY_ID VARCHAR REFERENCES dimEthnicity(ID),
GENDER_ID VARCHAR REFERENCES dimGender(ID),
CITY_ID VARCHAR REFERENCES dimCity(ID),
STUDY_ID VARCHAR REFERENCES dimStudy(ID),
AGE INTEGER,
DATE DATE,
LOINC VARCHAR REFERENCES dimLoinc(CODE),
VALUE VARCHAR,
UNITS VARCHAR
);
"""
sql_index_dwh["ix_factObservation_patient"] = """CREATE INDEX if not exists ix_factObservation_patient on factObservation(PATIENT_PSPID);"""
sql_index_dwh["ix_factObservation_marital"] = """CREATE INDEX if not exists ix_factObservation_marital on factObservation(MARITAL_ID);"""
sql_index_dwh["ix_factObservation_race"] = """CREATE INDEX if not exists ix_factObservation_race on factObservation(RACE_ID);"""
sql_index_dwh["ix_factObservation_ethnicity"] = """CREATE INDEX if not exists ix_factObservation_ethnicity on factObservation(ETHNICITY_ID);"""
sql_index_dwh["ix_factObservation_gender"] = """CREATE INDEX if not exists ix_factObservation_gender on factObservation(GENDER_ID);"""
sql_index_dwh["ix_factObservation_city"] = """CREATE INDEX if not exists ix_factObservation_city on factObservation(CITY_ID);"""
sql_index_dwh["ix_factObservation_study"] = """CREATE INDEX if not exists ix_factObservation_study on factObservation(STUDY_ID);"""
sql_index_dwh["ix_factObservation_loinc"] = """CREATE INDEX if not exists ix_factObservation_loinc on factObservation(LOINC);"""
# factProcedure
sql_table_dwh['factProcedure'] = """
create table if not exists factProcedure(
PATIENT_PSPID VARCHAR REFERENCES dimPatient(PSPID),
BIRTHYEAR INTEGER,
DEATHYEAR INTEGER,
MARITAL_ID VARCHAR REFERENCES dimMarital(ID),
RACE_ID VARCHAR REFERENCES dimRace(ID),
ETHNICITY_ID VARCHAR REFERENCES dimEthnicity(ID),
GENDER_ID VARCHAR REFERENCES dimGender(ID),
CITY_ID VARCHAR REFERENCES dimCity(ID),
STUDY_ID VARCHAR REFERENCES dimStudy(ID),
AGE INTEGER,
DATE DATE,
SNOMED VARCHAR REFERENCES dimSnomed(CODE)
);
"""
sql_index_dwh["ix_factProcedure_patient"] = """CREATE INDEX if not exists ix_factProcedure_patient on factProcedure(PATIENT_PSPID);"""
sql_index_dwh["ix_factProcedure_marital"] = """CREATE INDEX if not exists ix_factProcedure_marital on factProcedure(MARITAL_ID);"""
sql_index_dwh["ix_factProcedure_race"] = """CREATE INDEX if not exists ix_factProcedure_race on factProcedure(RACE_ID);"""
sql_index_dwh["ix_factProcedure_ethnicity"] = """CREATE INDEX if not exists ix_factProcedure_ethnicity on factProcedure(ETHNICITY_ID);"""
sql_index_dwh["ix_factProcedure_gender"] = """CREATE INDEX if not exists ix_factProcedure_gender on factProcedure(GENDER_ID);"""
sql_index_dwh["ix_factProcedure_city"] = """CREATE INDEX if not exists ix_factProcedure_city on factProcedure(CITY_ID);"""
sql_index_dwh["ix_factProcedure_study"] = """CREATE INDEX if not exists ix_factProcedure_study on factProcedure(STUDY_ID);"""
sql_index_dwh["ix_factProcedure_snomed"] = """CREATE INDEX if not exists ix_factProcedure_snomed on factProcedure(SNOMED);"""
# factCondition
sql_table_dwh['factCondition'] = """
create table if not exists factCondition(
PATIENT_PSPID VARCHAR REFERENCES dimPatient(PSPID),
BIRTHYEAR INTEGER,
DEATHYEAR INTEGER,
MARITAL_ID VARCHAR REFERENCES dimMarital(ID),
RACE_ID VARCHAR REFERENCES dimRace(ID),
ETHNICITY_ID VARCHAR REFERENCES dimEthnicity(ID),
GENDER_ID VARCHAR REFERENCES dimGender(ID),
CITY_ID VARCHAR REFERENCES dimCity(ID),
STUDY_ID VARCHAR REFERENCES dimStudy(ID),
AGE INTEGER,
START DATE,
STOP DATE,
SNOMED VARCHAR REFERENCES dimSnomed(CODE)
);
"""
sql_index_dwh["ix_factCondition_patient"] = """CREATE INDEX if not exists ix_factCondition_patient on factCondition(PATIENT_PSPID);"""
sql_index_dwh["ix_factCondition_marital"] = """CREATE INDEX if not exists ix_factCondition_marital on factCondition(MARITAL_ID);"""
sql_index_dwh["ix_factCondition_race"] = """CREATE INDEX if not exists ix_factCondition_race on factCondition(RACE_ID);"""
sql_index_dwh["ix_factCondition_ethnicity"] = """CREATE INDEX if not exists ix_factCondition_ethnicity on factCondition(ETHNICITY_ID);"""
sql_index_dwh["ix_factCondition_gender"] = """CREATE INDEX if not exists ix_factCondition_gender on factCondition(GENDER_ID);"""
sql_index_dwh["ix_factCondition_city"] = """CREATE INDEX if not exists ix_factCondition_city on factCondition(CITY_ID);"""
sql_index_dwh["ix_factCondition_study"] = """CREATE INDEX if not exists ix_factCondition_study on factCondition(STUDY_ID);"""
sql_index_dwh["ix_factCondition_snomed"] = """CREATE INDEX if not exists ix_factCondition_snomed on factCondition(SNOMED);"""
###Output
_____no_output_____
###Markdown
Function for connecting to the data warehouse.The data warehouse is a SQLite database stored in Google Drive.
###Code
def connect_to_db(db_file):
sqlite3_conn = None
try:
sqlite3_conn = sq.connect(db_file)
return sqlite3_conn
except Error as err:
print(err)
if sqlite3_conn is not None:
sqlite3_conn.close()
###Output
_____no_output_____
###Markdown
Creating the tables and indexes
###Code
conn_dwh = connect_to_db(db_file_path_cov_alle)
if conn_dwh is not None:
cursor_dwh = conn_dwh.cursor()
for name in sql_table_dwh.keys():
print(name)
cursor_dwh.execute(sql_table_dwh[name])
for ix_name in sql_index_dwh.keys():
print(ix_name)
cursor_dwh.execute(sql_index_dwh[ix_name])
else:
print('Connection to database failed')
###Output
patient
observation
condition
procedures
dimPatient
dimGender
dimStudy
dimCity
dimEthnicity
dimMarital
dimRace
dimSnomed
dimLoinc
factObservation
factProcedure
factCondition
ix_factObservation_patient
ix_factObservation_marital
ix_factObservation_race
ix_factObservation_ethnicity
ix_factObservation_gender
ix_factObservation_city
ix_factObservation_study
ix_factObservation_loinc
ix_factProcedure_patient
ix_factProcedure_marital
ix_factProcedure_race
ix_factProcedure_ethnicity
ix_factProcedure_gender
ix_factProcedure_city
ix_factProcedure_study
ix_factProcedure_snomed
ix_factCondition_patient
ix_factCondition_marital
ix_factCondition_race
ix_factCondition_ethnicity
ix_factCondition_gender
ix_factCondition_city
ix_factCondition_study
ix_factCondition_snomed
###Markdown
Inserting the information from the data frames into the tablesIn the case of the patient table, some columns are dropped and replaced by IDs.
###Code
# raw data
patient.to_sql(name = 'patient', con=conn_dwh, if_exists='append', index=False)
observation.to_sql(name = 'observation', con=conn_dwh, if_exists='append', index=False)
condition.to_sql(name = 'condition', con=conn_dwh, if_exists='append', index=False)
procedure.to_sql(name = 'procedure', con=conn_dwh, if_exists='append', index=False)
# dimensions
patient_to_dim = patient.drop(['Id','MARITAL', 'RACE', 'GENDER', 'CITY', 'STUDY', 'ETHNICITY'], axis=1)
patient_to_dim.to_sql(name = 'dimPatient', con=conn_dwh, if_exists='append', index=False)
gender.to_sql(name = 'dimGender', con=conn_dwh, if_exists='append', index=False)
study.to_sql(name = 'dimStudy', con=conn_dwh, if_exists='append', index=False)
city.to_sql(name = 'dimCity', con=conn_dwh, if_exists='append', index=False)
ethnicity.to_sql(name = 'dimEthnicity', con=conn_dwh, if_exists='append', index=False)
marital.to_sql(name = 'dimMarital', con=conn_dwh, if_exists='append', index=False)
race.to_sql(name = 'dimRace', con=conn_dwh, if_exists='append', index=False)
###Output
_____no_output_____
###Markdown
Extraction of SNOMED-CT and LOINC for the dimensions in the data warehouse
**SQL explanation**: Selects the distinct `code` and `description` from the tables `procedure` and `condition` for SNOMED and from `observation` for LOINC, keeping for each `code` only the longest `description` (there are `code`s with several different `description`s), and sorts the result by `code`.
The result of this SQL statement is stored in a data frame and then inserted into the dimension tables.
###Code
# SNOMED-CT
snomed = pd.read_sql_query("""
select distinct code, description from(
select distinct code, description FROM "procedure" p
union
select distinct code, description FROM "condition" c
) as snomed
group by code
having max(LENGTH(description))
order by code
;""", conn_dwh
)
snomed.to_sql(name = 'dimSnomed', con=conn_dwh, if_exists='append', index=False)
snomed.head(3)
loinc = pd.read_sql_query("""
select distinct code, description from(
select distinct code, description FROM observation
) as loinc
group by code
having max(LENGTH(description))
order by code
;""", conn_dwh
)
loinc.to_sql(name = 'dimLoinc', con=conn_dwh, if_exists='append', index=False)
loinc.head(3)
###Output
_____no_output_____
###Markdown
Selecting and inserting the information into the fact tables
- Select the information with a SELECT statement and store it in a data frame
- Insert the data frame into the fact tables
###Code
# fatObservation
factObservation = pd.read_sql_query("""
select DISTINCT
PSPID PATIENT_PSPID,
BIRTHDATE BIRTHYEAR,
DEATHDATE DEATHYEAR,
dm.ID MARITAL_ID,
dr.ID RACE_ID ,
de.ID ETHNICITY_ID,
dg.ID GENDER_ID,
dc.ID CITY_ID,
ds.ID STUDY_ID,
AGE,
o.date DATE,
o.CODE LOINC,
o.VALUE,
o.UNITS
from patient pat
join dimMarital dm
on dm.MARITAL = pat.MARITAL
join dimRace dr
on dr.RACE = pat.RACE
join dimEthnicity de
on de.ETHNICITY = pat.ETHNICITY
join dimCity dc
on dc.CITY = pat.CITY
join dimGender dg
on dg.GENDER = pat.GENDER
join dimStudy ds
on ds.STUDY = pat.STUDY
join observation o
on o.PATIENT = pat.Id
;"""
, conn_dwh)
factObservation.to_sql(name='factObservation', con=conn_dwh, if_exists='append', index=False)
# factProcedure
factProcedure = pd.read_sql_query("""
select DISTINCT
PSPID PATIENT_PSPID,
BIRTHDATE BIRTHYEAR,
DEATHDATE DEATHYEAR,
dm.ID MARITAL_ID,
dr.ID RACE_ID ,
de.ID ETHNICITY_ID,
dg.ID GENDER_ID,
dc.ID CITY_ID,
ds.ID STUDY_ID,
AGE,
p.date DATE,
p.CODE SNOMED
from patient pat
join dimMarital dm
on dm.MARITAL = pat.MARITAL
join dimRace dr
on dr.RACE = pat.RACE
join dimEthnicity de
on de.ETHNICITY = pat.ETHNICITY
join dimCity dc
on dc.CITY = pat.CITY
join dimGender dg
on dg.GENDER = pat.GENDER
join dimStudy ds
on ds.STUDY = pat.STUDY
join "procedure" p
on p.PATIENT = pat.Id
;""", conn_dwh)
factProcedure.to_sql(name='factProcedure', con=conn_dwh, if_exists='append', index=False)
# factCondition
factCondition = pd.read_sql_query("""
select DISTINCT
PSPID PATIENT_PSPID,
BIRTHDATE BIRTHYEAR,
DEATHDATE DEATHYEAR,
dm.ID MARITAL_ID,
dr.ID RACE_ID ,
de.ID ETHNICITY_ID,
dg.ID GENDER_ID,
dc.ID CITY_ID,
ds.ID STUDY_ID,
AGE,
c.START,
c.STOP,
c.CODE SNOMED
from patient pat
join dimMarital dm
on dm.MARITAL = pat.MARITAL
join dimRace dr
on dr.RACE = pat.RACE
join dimEthnicity de
on de.ETHNICITY = pat.ETHNICITY
join dimCity dc
on dc.CITY = pat.CITY
join dimGender dg
on dg.GENDER = pat.GENDER
join dimStudy ds
on ds.STUDY = pat.STUDY
join "condition" c
on c.PATIENT = pat.Id
;""", conn_dwh)
factCondition.to_sql(name='factCondition', con=conn_dwh, if_exists='append', index=False)
###Output
_____no_output_____
###Markdown
Views in the data warehouse
* `v_patients`
* `v_observations`
* `v_conditions`
* `v_procedures`
These views make the data analysis easier.
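As an illustration of the OLAP-style access these views provide, once the cell below has created them, patient counts per study and gender can be read straight from `v_patients` (a hypothetical example, not part of the original notebook):
```python
# Hypothetical example: distinct patients per study and gender via the view.
pd.read_sql_query("""
    select STUDY, GENDER, count(*) as N_PATIENTS
    from v_patients
    group by STUDY, GENDER
    order by STUDY, GENDER;
""", conn_dwh)
```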
###Code
cursor_dwh.executescript(
"""
-- Patients
CREATE view v_patients as
select DISTINCT
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
MARITAL,
RACE,
ETHNICITY,
GENDER,
CITY,
AGE,
STUDY
from factObservation fo
JOIN dimMarital dm
ON fo.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fo.RACE_ID
join dimEthnicity de
on de.ID = fo.ETHNICITY_ID
join dimGender dg
on dg.ID = fo.GENDER_ID
join dimCity dc
on dc.ID = fo.CITY_ID
join dimStudy ds
on ds.ID = fo.STUDY_ID ;
-- Observations
create view v_observations as
select
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
dm.MARITAL,
dr.RACE,
de.ETHNICITY,
dg.GENDER,
dc.CITY ,
AGE,
DATE,
LOINC,
dl.description DESCRIPTION,
VALUE,
UNITS,
ds.STUDY
from factObservation fo
join dimMarital dm
on fo.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fo.RACE_ID
join dimEthnicity de
on de.ID = fo.ETHNICITY_ID
join dimGender dg
on dg.ID = fo.GENDER_ID
join dimCity dc
on dc.ID = fo.CITY_ID
join dimLoinc dl
on dl.code = fo.LOINC
join dimStudy ds
on ds.ID = fo.STUDY_ID
;
-- Conditions
create view v_conditions as
select
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
dm.MARITAL,
dr.RACE,
de.ETHNICITY,
dg.GENDER,
dc.CITY ,
AGE,
"START" ,
STOP ,
SNOMED ,
dsn.description DESCRIPTION,
ds.STUDY
from factCondition fc
join dimMarital dm
on fc.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fc.RACE_ID
join dimEthnicity de
on de.ID = fc.ETHNICITY_ID
join dimGender dg
on dg.ID = fc.GENDER_ID
join dimCity dc
on dc.ID = fc.CITY_ID
join dimSnomed dsn
on dsn.code = fc.SNOMED
join dimStudy ds
on ds.ID = fc.STUDY_ID
;
-- Procedures
create view v_procedures as
select
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
dm.MARITAL,
dr.RACE,
de.ETHNICITY,
dg.GENDER,
dc.CITY ,
AGE,
DATE ,
SNOMED ,
dsn.description DESCRIPTION,
ds.STUDY
from factProcedure fc
join dimMarital dm
on fc.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fc.RACE_ID
join dimEthnicity de
on de.ID = fc.ETHNICITY_ID
join dimGender dg
on dg.ID = fc.GENDER_ID
join dimCity dc
on dc.ID = fc.CITY_ID
join dimSnomed dsn
on dsn.code = fc.SNOMED
join dimStudy ds
on ds.ID = fc.STUDY_ID
;
--drop tables with raw data
drop table if exists patient;
drop table if exists condition;
drop table if exists procedure;
drop table if exists observation;
"""
)
###Output
_____no_output_____
###Markdown
Committing and closing the connections
###Code
# commit and close connections
conn_dwh.commit()
conn_dwh.close()
###Output
_____no_output_____
###Markdown
ETL pipeline and implementation of a data warehouse with the COVID-19 and Allergy informationThis script extracts the information from the CSV files of **COVID-19** and **Allergy** from the data set [SyntheaTM Patient Generator](https://github.com/synthetichealth/synthea). From this data set, the files `patients.csv`, `observations.csv`, `procedures.csv` and `conditions.csv` of the two pathologies are used.Although these data are synthetic, the patient IDs are replaced by MD5 hash strings and the dates of birth and death are coarsened to years of birth and death in order to simulate pseudonymization. In addition, other columns that are not relevant (for our project) are not carried over.The transformed "raw" data are stored in the data warehouse. They are the basis of the dimension and fact tables. For the data warehousing, a star schema together with views is created to enable Online Analytical Processing. At the end, the tables holding the raw data are deleted. Reproducibility
Delete all variables
###Code
%reset -f
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import pandas as pd
import sys
from pandas.util import hash_pandas_object
from functools import reduce
import sqlite3 as sq
from sqlite3 import Error
import hashlib as hl
import csv
import numpy as np
from pandas_profiling import ProfileReport
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
###Output
Mounted at /content/drive
###Markdown
Check the versions of the environment and libraries
|Library|Version|
|-|-|
| csv |1.0|
| pandas |1.1.5|
| numpy |1.19.5|
| sqlite3 |2.6.0|
| hashlib |3.9|
| google|2.0.3|
###Code
#sys.version
# %pip freeze
###Output
_____no_output_____
###Markdown
Definition of the variables for the CSV files and databases
###Code
# Studies
patient_allergy = "allergy"
patient_covid19 = "covid19"
# csv files
material_path_covid19 = "/content/drive/MyDrive/csv_data/"+patient_covid19+"/"
material_path_allergy = "/content/drive/MyDrive/csv_data/"+patient_allergy+"/"
# Data Warehouse
db_file_path_cov_alle = "/content/drive/MyDrive/db_files/cov_alle.db"
!rm {db_file_path_cov_alle} # delete file if exists
###Output
_____no_output_____
###Markdown
Extraction and transformationDatasets used- `patients`- `observations`- `conditions`- `procedures`Method* Load the CSV files into data frames* Add a new column with the study* Concatenate the two data frames* Drop redundant and unneeded variables* Create pseudonyms (patients only)* Handle missing values Load the CSV files
###Code
# Patients
# covid-19
patient_cov = pd.read_csv(material_path_covid19 + "/patients.csv")
# allergy
patient_all = pd.read_csv(material_path_allergy + "/patients.csv")
# Observations
# covid-19
observation_cov = pd.read_csv(material_path_covid19 + "/observations.csv")
# allergy
observation_all = pd.read_csv(material_path_allergy + "/observations.csv")
# Conditions
# covid19
condition_cov = pd.read_csv(material_path_covid19 + "/conditions.csv")
# allergy
condition_all = pd.read_csv(material_path_allergy + "/conditions.csv")
# Procedures
# covid19
procedure_cov = pd.read_csv(material_path_covid19 + "/procedures.csv")
# allergy
procedure_all = pd.read_csv(material_path_allergy + "/procedures.csv")
###Output
_____no_output_____
###Markdown
Compute the checksums of the data setThese are used to validate the data set.
###Code
# Calculate the checksum of the data set
"""patient_cov_hash = hash_pandas_object(patient_cov)
patient_all_hash = hash_pandas_object(patient_all)
observation_cov_hash = hash_pandas_object(observation_cov)
observation_all_hash = hash_pandas_object(observation_all)
condition_cov_hash = hash_pandas_object(condition_cov)
condition_all_hash = hash_pandas_object(condition_all)
procedure_cov_hash = hash_pandas_object(procedure_cov)
procedure_all_hash = hash_pandas_object(procedure_all)"""
###Output
_____no_output_____
###Markdown
Show the checksums of the data set
###Code
"""print('COVID-19\npatients.csv: ' + str(patient_cov_hash.sum()))
print('observations.csv: ' + str(observation_cov_hash.sum()))
print('conditions.csv: ' + str(condition_cov_hash.sum()))
print('procedures.csv: ' + str(procedure_cov_hash.sum()))
print('\nAllergy\npatients.csv: ' + str(patient_all_hash.sum()))
print('observations.csv: ' + str(observation_all_hash.sum()))
print('conditions.csv: ' + str(condition_all_hash.sum()))
print('procdures.csv: ' + str(procedure_all_hash.sum())) """
###Output
_____no_output_____
###Markdown
Checksum values
- patient_cov_hash = 5546755912481062969
- patient_all_hash = -7991106203008684058
- observation_cov_hash = 6828304207460361826
- observation_all_hash = 604369861310068017
- condition_cov_hash = 8579072675289514469
- condition_all_hash = 8446863690955095255
- procedure_cov_hash = -6236636528620667348
- procedure_all_hash = -7480254982461391402 Building the data set from the CSV files
###Code
# Patients
#covid
patient_cov["STUDY"] = 'COVID-19' # new column with study
#allergy
patient_all["STUDY"] = 'Allergy'
#union of both dataframes
patient = pd.concat([patient_all, patient_cov]).drop_duplicates()
# delete not important columns
patient = patient.drop(['SSN', 'PREFIX', 'ZIP', 'DRIVERS', 'PASSPORT', 'FIRST',
'LAST', 'BIRTHPLACE', 'ADDRESS', 'STATE', 'COUNTY', 'MAIDEN', 'SUFFIX'], axis=1)
# new hash patient id
patient['PSPID'] = [hl.md5(val.encode('UTF-8')).hexdigest() for val in patient['Id']]
# further transformations to clean the information
patient['MARITAL'].fillna(patient['MARITAL'].mode()[0], inplace=True)
patient["DEATHDATE"] = patient.DEATHDATE.fillna(pd.to_datetime("today"))
# date time transformation and keep the year of birth and year of death
patient["DEATHDATE"] = pd.to_datetime(patient["DEATHDATE"])
patient["DEATHDATE"] = patient.DEATHDATE.dt.year
patient["BIRTHDATE"] = pd.to_datetime(patient["BIRTHDATE"])
patient['BIRTHDATE'] = patient.BIRTHDATE.dt.year
# calculate age
patient["AGE"] = patient.DEATHDATE - patient.BIRTHDATE
# show some patients
pd.concat([patient.head(3), patient.tail(3)])
# Observations
observation_cov = pd.read_csv(material_path_covid19 + "/observations.csv")
observation_cov["STUDY"] = 2
observation_all = pd.read_csv(material_path_allergy + "/observations.csv")
observation_all["STUDY"] = 1
observation = pd.concat([observation_all, observation_cov]).drop_duplicates()
observation = observation.drop(['ENCOUNTER', 'TYPE'], axis=1)
observation["DATE"] = pd.to_datetime(observation["DATE"])
pd.concat([observation.head(3), observation.tail(3)])
# Conditions
condition_cov = pd.read_csv(material_path_covid19 + "/conditions.csv")
condition_cov["STUDY"] = 2
condition_all = pd.read_csv(material_path_allergy + "/conditions.csv")
condition_all["STUDY"] = 1
condition = pd.concat([condition_all, condition_cov]).drop_duplicates()
condition = condition.drop(['ENCOUNTER'], axis=1)
condition["START"] = pd.to_datetime(condition["START"])
condition["STOP"] = condition.STOP.fillna(pd.to_datetime("today"))
condition["STOP"] = pd.to_datetime(condition["STOP"])
pd.concat([condition.head(3), condition.tail(3)])
# Procedures
procedure_cov = pd.read_csv(material_path_covid19 + "/procedures.csv")
procedure_cov["STUDY"] = 2
procedure_all = pd.read_csv(material_path_allergy + "/procedures.csv")
procedure_all["STUDY"] = 1
procedure = pd.concat([procedure_all, procedure_cov])
procedure = procedure.drop(['ENCOUNTER', 'REASONCODE', 'REASONDESCRIPTION'], axis=1)
procedure["DATE"] = pd.to_datetime(procedure["DATE"])
pd.concat([procedure.head(3), procedure.tail(3)])
###Output
_____no_output_____
###Markdown
Data Warehouse

Prepare the dimensions for the data warehouse from the `patients` table
* Extract the columns of interest
* Drop duplicate values
* Create a new index
* Add a new column with IDs
###Code
# gender
# select gender
gender = patient[['GENDER']]
# delete duplicated
gender = gender.drop_duplicates()
# new index
gender = gender.reset_index()
# new column with IDs
gender["ID"] = gender.index + 1
gender = gender.drop(['index'], axis=1)
gender
# race
race = patient[['RACE']]
race = race.drop_duplicates()
race = race.reset_index()
race['ID'] = race.index + 1
race = race.drop('index', axis=1)
race
# marital
marital = patient[['MARITAL']]
marital = marital.drop_duplicates()
marital = marital.reset_index()
marital['ID'] = marital.index + 1
marital = marital.drop('index', axis=1)
marital
# ethnicity
ethnicity = patient[['ETHNICITY']]
ethnicity = ethnicity.drop_duplicates()
ethnicity = ethnicity.reset_index()
ethnicity['ID'] = ethnicity.index + 1
ethnicity = ethnicity.drop('index', axis=1)
ethnicity
# study
study = patient[['STUDY']]
study = study.drop_duplicates()
study = study.reset_index()
study['ID'] = study.index + 1
study = study.drop('index', axis=1)
study
# city
city = patient[['CITY']]
city = city.drop_duplicates()
city = city.reset_index()
city['ID'] = city.index + 1
city = city.drop('index', axis=1)
city.head(3)
###Output
_____no_output_____
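###Markdown
The six dimension builds above all follow the same pattern; as a design note, a small helper could replace them (a sketch; the name `make_dim` is our own):
###Code
def make_dim(df, column):
    # distinct values of one column plus a surrogate ID, ready to load into a dimension table
    dim = df[[column]].drop_duplicates().reset_index(drop=True)
    dim['ID'] = dim.index + 1
    return dim

make_dim(patient, 'GENDER')  # equivalent to the gender dimension built above
###Output
_____no_output_____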
###Markdown
Tables for the data warehouse
###Code
sql_table_dwh = {} # Data Warehouse
###Output
_____no_output_____
###Markdown
Tables for the raw data in the data warehouse
###Code
# patient: Id BIRTHDATE DEATHDATE MARITAL RACE ETHNICITY GENDER CITY LAT LON HEALTHCARE_EXPENSES HEALTHCARE_COVERAGE STUDY PSPID AGE
sql_table_dwh['patient'] = """
create table if not exists patient(
ID VARCHAR,
BIRTHDATE INTEGER,
DEATHDATE INTEGER,
MARITAL VARCHAR,
RACE VARCHAR,
ETHNICITY VARCHAR,
GENDER VARCHAR,
CITY VARCHAR,
LAT DOUBLE,
LON DOUBLE,
HEALTHCARE_EXPENSES DOUBLE,
HEALTHCARE_COVERAGE DOUBLE,
STUDY VARCHAR,
PSPID VARCHAR,
AGE INTEGER
);
"""
# observations: DATE PATIENT CODE DESCRIPTION VALUE UNITS STUDY
sql_table_dwh['observation'] = """
create table if not exists observation(
DATE DATE,
PATIENT VARCHAR,
CODE VARCHAR,
DESCRIPTION VARCHAR,
VALUE VARCHAR,
UNITS VARCHAR,
STUDY VARCHAR
);
"""
# conditions: START STOP PATIENT CODE DESCRIPTION STUDY
sql_table_dwh['condition'] = """
create table if not exists condition(
START DATE,
STOP DATE,
PATIENT VARCHAR,
CODE VARCHAR,
DESCRIPTION VARCHAR,
STUDY VARCHAR
);
"""
# procedures: DATE PATIENT CODE DESCRIPTION BASE_COST STUDY
sql_table_dwh['procedures'] = """
create table if not exists procedure(
DATE DATE,
PATIENT VARCHAR,
CODE VARCHAR,
DESCRIPTION VARCHAR,
BASE_COST VARCHAR,
STUDY VARCHAR
);
"""
###Output
_____no_output_____
###Markdown
Dimension tables for the data warehouse
###Code
# patient
sql_table_dwh['dimPatient'] = """
create table if not exists dimPatient(
ID VARCHAR,
BIRTHDATE INTEGER,
DEATHDATE INTEGER,
LAT DOUBLE,
LON DOUBLE,
HEALTHCARE_EXPENSES DOUBLE,
HEALTHCARE_COVERAGE DOUBLE,
PSPID VARCHAR PRIMARY KEY,
AGE INTEGER
);
"""
# Gender
sql_table_dwh['dimGender'] = """
create table if not exists dimGender(
ID INTEGER PRIMARY KEY,
GENDER VARCHAR UNIQUE NOT NULL
);
"""
# Study
sql_table_dwh['dimStudy'] = """
create table if not exists dimStudy(
ID INTEGER PRIMARY KEY,
STUDY VARCHAR UNIQUE NOT NULL
);
"""
# City
sql_table_dwh['dimCity'] = """
create table if not exists dimCity(
ID INTEGER PRIMARY KEY,
CITY VARCHAR UNIQUE NOT NULL
);
"""
# Ethnicity
sql_table_dwh['dimEthnicity'] = """
create table if not exists dimEthnicity(
ID INTEGER PRIMARY KEY,
ETHNICITY VARCHAR UNIQUE NOT NULL
);
"""
# Marital
sql_table_dwh['dimMarital'] = """
create table if not exists dimMarital(
ID INTEGER PRIMARY KEY,
MARITAL VARCHAR UNIQUE NOT NULL
);
"""
# Race
sql_table_dwh['dimRace'] = """
create table if not exists dimRace(
ID INTEGER PRIMARY KEY,
RACE VARCHAR UNIQUE NOT NULL
);
"""
# SNOMED
sql_table_dwh['dimSnomed'] = """
create table if not exists dimSnomed(
CODE VARCHAR PRIMARY KEY,
DESCRIPTION VARCHAR UNIQUE NOT NULL
);
"""
# LOINC
sql_table_dwh['dimLoinc'] = """
create table if not exists dimLoinc(
CODE VARCHAR PRIMARY KEY,
DESCRIPTION VARCHAR UNIQUE NOT NULL
);
"""
# show tables
print(sql_table_dwh.keys())
###Output
dict_keys(['patient', 'observation', 'condition', 'procedures', 'dimPatient', 'dimGender', 'dimStudy', 'dimCity', 'dimEthnicity', 'dimMarital', 'dimRace', 'dimSnomed', 'dimLoinc'])
###Markdown
Function to connect to a SQLite database
###Code
def connect_to_db(db_file):
    sqlite3_conn = None
    try:
        sqlite3_conn = sq.connect(db_file)
        return sqlite3_conn
    except sq.Error as err:
        # sqlite3 is imported as sq, so its Error class lives on that module
        print(err)
        if sqlite3_conn is not None:
            sqlite3_conn.close()
###Output
_____no_output_____
###Markdown
Create the tables in the data warehouse
###Code
conn_dwh = connect_to_db(db_file_path_cov_alle)
if conn_dwh is not None:
cursor_dwh = conn_dwh.cursor()
for name in sql_table_dwh.keys():
print(name)
cursor_dwh.execute(sql_table_dwh[name])
else:
print('Connection to database failed')
###Output
patient
observation
condition
procedures
dimPatient
dimGender
dimStudy
dimCity
dimEthnicity
dimMarital
dimRace
dimSnomed
dimLoinc
###Markdown
Insert the data frame contents into the data warehouse

In the case of the patient table, some columns are dropped and replaced by IDs.
###Code
# raw data
patient.to_sql(name = 'patient', con=conn_dwh, if_exists='append', index=False)
observation.to_sql(name = 'observation', con=conn_dwh, if_exists='append', index=False)
condition.to_sql(name = 'condition', con=conn_dwh, if_exists='append', index=False)
procedure.to_sql(name = 'procedure', con=conn_dwh, if_exists='append', index=False)
# dimensions
patient_to_dim = patient.drop(['MARITAL', 'RACE', 'GENDER', 'CITY', 'STUDY', 'ETHNICITY'], axis=1)
patient_to_dim.to_sql(name = 'dimPatient', con=conn_dwh, if_exists='append', index=False)
gender.to_sql(name = 'dimGender', con=conn_dwh, if_exists='append', index=False)
study.to_sql(name = 'dimStudy', con=conn_dwh, if_exists='append', index=False)
city.to_sql(name = 'dimCity', con=conn_dwh, if_exists='append', index=False)
ethnicity.to_sql(name = 'dimEthnicity', con=conn_dwh, if_exists='append', index=False)
marital.to_sql(name = 'dimMarital', con=conn_dwh, if_exists='append', index=False)
race.to_sql(name = 'dimRace', con=conn_dwh, if_exists='append', index=False)
###Output
_____no_output_____
###Markdown
Extract SNOMED-CT and LOINC codes for the dimensions in the data warehouse

**SQL explanation**: Select the distinct `code` and `description` pairs from the `procedure` and `condition` tables for SNOMED, and from `observation` for LOINC; for each `code` keep only the longest `description` (some codes appear with several descriptions), and sort the result by `code`.

The result of this SQL statement is stored in a data frame and then inserted into the dimension tables of the data warehouse.
###Code
# SNOMED-CT
snomed = pd.read_sql_query("""
select distinct code, description from(
select distinct code, description FROM "procedure" p
union
select distinct code, description FROM "condition" c
) as snomed
group by code
having max(LENGTH(description))
order by code
;""", conn_dwh
)
snomed.to_sql(name = 'dimSnomed', con=conn_dwh, if_exists='append', index=False)
snomed.head(3)
loinc = pd.read_sql_query("""
select distinct code, description from(
select distinct code, description FROM observation
) as loinc
group by code
having max(LENGTH(description))
order by code
;""", conn_dwh
)
loinc.to_sql(name = 'dimLoinc', con=conn_dwh, if_exists='append', index=False)
loinc.head(3)
###Output
_____no_output_____
###Markdown
Fact tables in the data warehouse
* `factObservation`
* `factProcedure`
* `factCondition`

Each table has an index on every ID column.
###Code
sql_table_dwh = {} # tables
sql_index_dwh = {} # indices
# factObservation
sql_table_dwh['factObservation'] = """
create table if not exists factObservation(
PATIENT_PSPID VARCHAR REFERENCES dimPatient(PSPID),
BIRTHYEAR INTEGER,
DEATHYEAR INTEGER,
MARITAL_ID VARCHAR REFERENCES dimMarital(ID),
RACE_ID VARCHAR REFERENCES dimRace(ID),
ETHNICITY_ID VARCHAR REFERENCES dimEthnicity(ID),
GENDER_ID VARCHAR REFERENCES dimGender(ID),
CITY_ID VARCHAR REFERENCES dimCity(ID),
STUDY_ID VARCHAR REFERENCES dimStudy(ID),
AGE INTEGER,
DATE DATE,
LOINC VARCHAR REFERENCES dimLoinc(CODE),
VALUE VARCHAR,
UNITS VARCHAR
);
"""
sql_index_dwh["ix_factObservation_patient"] = """CREATE INDEX if not exists ix_factObservation_patient on factObservation(PATIENT_PSPID);"""
sql_index_dwh["ix_factObservation_marital"] = """CREATE INDEX if not exists ix_factObservation_marital on factObservation(MARITAL_ID);"""
sql_index_dwh["ix_factObservation_race"] = """CREATE INDEX if not exists ix_factObservation_race on factObservation(RACE_ID);"""
sql_index_dwh["ix_factObservation_ethnicity"] = """CREATE INDEX if not exists ix_factObservation_ethnicity on factObservation(ETHNICITY_ID);"""
sql_index_dwh["ix_factObservation_gender"] = """CREATE INDEX if not exists ix_factObservation_gender on factObservation(GENDER_ID);"""
sql_index_dwh["ix_factObservation_city"] = """CREATE INDEX if not exists ix_factObservation_city on factObservation(CITY_ID);"""
sql_index_dwh["ix_factObservation_study"] = """CREATE INDEX if not exists ix_factObservation_study on factObservation(STUDY_ID);"""
sql_index_dwh["ix_factObservation_loinc"] = """CREATE INDEX if not exists ix_factObservation_loinc on factObservation(LOINC);"""
# factProcedure
sql_table_dwh['factProcedure'] = """
create table if not exists factProcedure(
PATIENT_PSPID VARCHAR REFERENCES dimPatient(PSPID),
BIRTHYEAR INTEGER,
DEATHYEAR INTEGER,
MARITAL_ID VARCHAR REFERENCES dimMarital(ID),
RACE_ID VARCHAR REFERENCES dimRace(ID),
ETHNICITY_ID VARCHAR REFERENCES dimEthnicity(ID),
GENDER_ID VARCHAR REFERENCES dimGender(ID),
CITY_ID VARCHAR REFERENCES dimCity(ID),
STUDY_ID VARCHAR REFERENCES dimStudy(ID),
AGE INTEGER,
DATE DATE,
SNOMED VARCHAR REFERENCES dimSnomed(CODE)
);
"""
sql_index_dwh["ix_factProcedure_patient"] = """CREATE INDEX if not exists ix_factProcedure_patient on factProcedure(PATIENT_PSPID);"""
sql_index_dwh["ix_factProcedure_marital"] = """CREATE INDEX if not exists ix_factProcedure_marital on factProcedure(MARITAL_ID);"""
sql_index_dwh["ix_factProcedure_race"] = """CREATE INDEX if not exists ix_factProcedure_race on factProcedure(RACE_ID);"""
sql_index_dwh["ix_factProcedure_ethnicity"] = """CREATE INDEX if not exists ix_factProcedure_ethnicity on factProcedure(ETHNICITY_ID);"""
sql_index_dwh["ix_factProcedure_gender"] = """CREATE INDEX if not exists ix_factProcedure_gender on factProcedure(GENDER_ID);"""
sql_index_dwh["ix_factProcedure_city"] = """CREATE INDEX if not exists ix_factProcedure_city on factProcedure(CITY_ID);"""
sql_index_dwh["ix_factProcedure_study"] = """CREATE INDEX if not exists ix_factProcedure_study on factProcedure(STUDY_ID);"""
sql_index_dwh["ix_factProcedure_snomed"] = """CREATE INDEX if not exists ix_factProcedure_snomed on factProcedure(SNOMED);"""
# factCondition
sql_table_dwh['factCondition'] = """
create table if not exists factCondition(
PATIENT_PSPID VARCHAR REFERENCES dimPatient(PSPID),
BIRTHYEAR INTEGER,
DEATHYEAR INTEGER,
MARITAL_ID VARCHAR REFERENCES dimMarital(ID),
RACE_ID VARCHAR REFERENCES dimRace(ID),
ETHNICITY_ID VARCHAR REFERENCES dimEthnicity(ID),
GENDER_ID VARCHAR REFERENCES dimGender(ID),
CITY_ID VARCHAR REFERENCES dimCity(ID),
STUDY_ID VARCHAR REFERENCES dimStudy(ID),
AGE INTEGER,
START DATE,
STOP DATE,
SNOMED VARCHAR REFERENCES dimSnomed(CODE)
);
"""
sql_index_dwh["ix_factCondition_patient"] = """CREATE INDEX if not exists ix_factCondition_patient on factCondition(PATIENT_PSPID);"""
sql_index_dwh["ix_factCondition_marital"] = """CREATE INDEX if not exists ix_factCondition_marital on factCondition(MARITAL_ID);"""
sql_index_dwh["ix_factCondition_race"] = """CREATE INDEX if not exists ix_factCondition_race on factCondition(RACE_ID);"""
sql_index_dwh["ix_factCondition_ethnicity"] = """CREATE INDEX if not exists ix_factCondition_ethnicity on factCondition(ETHNICITY_ID);"""
sql_index_dwh["ix_factCondition_gender"] = """CREATE INDEX if not exists ix_factCondition_gender on factCondition(GENDER_ID);"""
sql_index_dwh["ix_factCondition_city"] = """CREATE INDEX if not exists ix_factCondition_city on factCondition(CITY_ID);"""
sql_index_dwh["ix_factCondition_study"] = """CREATE INDEX if not exists ix_factCondition_study on factCondition(STUDY_ID);"""
sql_index_dwh["ix_factCondition_snomed"] = """CREATE INDEX if not exists ix_factCondition_snomed on factCondition(SNOMED);"""
print(sql_table_dwh.keys()) # show tables
print(sql_index_dwh.keys()) # show indices
###Output
dict_keys(['factObservation', 'factProcedure', 'factCondition'])
dict_keys(['ix_factObservation_patient', 'ix_factObservation_marital', 'ix_factObservation_race', 'ix_factObservation_ethnicity', 'ix_factObservation_gender', 'ix_factObservation_city', 'ix_factObservation_study', 'ix_factObservation_loinc', 'ix_factProcedure_patient', 'ix_factProcedure_marital', 'ix_factProcedure_race', 'ix_factProcedure_ethnicity', 'ix_factProcedure_gender', 'ix_factProcedure_city', 'ix_factProcedure_study', 'ix_factProcedure_snomed', 'ix_factCondition_patient', 'ix_factCondition_marital', 'ix_factCondition_race', 'ix_factCondition_ethnicity', 'ix_factCondition_gender', 'ix_factCondition_city', 'ix_factCondition_study', 'ix_factCondition_snomed'])
###Markdown
Create the fact tables and indexes in the data warehouse
###Code
if conn_dwh is not None:
# cursor_dwh = conn_dwh.cursor()
for name in sql_table_dwh.keys():
print(name)
cursor_dwh.execute(sql_table_dwh[name])
for ix_name in sql_index_dwh.keys():
print(ix_name)
cursor_dwh.execute(sql_index_dwh[ix_name])
else:
print('Connection to database failed')
###Output
factObservation
factProcedure
factCondition
ix_factObservation_patient
ix_factObservation_marital
ix_factObservation_race
ix_factObservation_ethnicity
ix_factObservation_gender
ix_factObservation_city
ix_factObservation_study
ix_factObservation_loinc
ix_factProcedure_patient
ix_factProcedure_marital
ix_factProcedure_race
ix_factProcedure_ethnicity
ix_factProcedure_gender
ix_factProcedure_city
ix_factProcedure_study
ix_factProcedure_snomed
ix_factCondition_patient
ix_factCondition_marital
ix_factCondition_race
ix_factCondition_ethnicity
ix_factCondition_gender
ix_factCondition_city
ix_factCondition_study
ix_factCondition_snomed
###Markdown
Select the data for the fact tables and insert it into the fact tables in the data warehouse
- Select the data with a SELECT statement and store it in a data frame
- Insert the data frame into the fact tables in the data warehouse
###Code
# factObservation
factObservation = pd.read_sql_query("""
select DISTINCT
PSPID PATIENT_PSPID,
BIRTHDATE BIRTHYEAR,
DEATHDATE DEATHYEAR,
dm.ID MARITAL_ID,
dr.ID RACE_ID ,
de.ID ETHNICITY_ID,
dg.ID GENDER_ID,
dc.ID CITY_ID,
ds.ID STUDY_ID,
AGE,
o.date DATE,
o.CODE LOINC,
o.VALUE,
o.UNITS
from patient pat
join dimMarital dm
on dm.MARITAL = pat.MARITAL
join dimRace dr
on dr.RACE = pat.RACE
join dimEthnicity de
on de.ETHNICITY = pat.ETHNICITY
join dimCity dc
on dc.CITY = pat.CITY
join dimGender dg
on dg.GENDER = pat.GENDER
join dimStudy ds
on ds.STUDY = pat.STUDY
join observation o
on o.PATIENT = pat.Id
;"""
, conn_dwh)
factObservation.to_sql(name='factObservation', con=conn_dwh, if_exists='append', index=False)
# factProcedure
factProcedure = pd.read_sql_query("""
select DISTINCT
PSPID PATIENT_PSPID,
BIRTHDATE BIRTHYEAR,
DEATHDATE DEATHYEAR,
dm.ID MARITAL_ID,
dr.ID RACE_ID ,
de.ID ETHNICITY_ID,
dg.ID GENDER_ID,
dc.ID CITY_ID,
ds.ID STUDY_ID,
AGE,
p.date DATE,
p.CODE SNOMED
from patient pat
join dimMarital dm
on dm.MARITAL = pat.MARITAL
join dimRace dr
on dr.RACE = pat.RACE
join dimEthnicity de
on de.ETHNICITY = pat.ETHNICITY
join dimCity dc
on dc.CITY = pat.CITY
join dimGender dg
on dg.GENDER = pat.GENDER
join dimStudy ds
on ds.STUDY = pat.STUDY
join "procedure" p
on p.PATIENT = pat.Id
;""", conn_dwh)
factProcedure.to_sql(name='factProcedure', con=conn_dwh, if_exists='append', index=False)
# factCondition
factCondition = pd.read_sql_query("""
select DISTINCT
PSPID PATIENT_PSPID,
BIRTHDATE BIRTHYEAR,
DEATHDATE DEATHYEAR,
dm.ID MARITAL_ID,
dr.ID RACE_ID ,
de.ID ETHNICITY_ID,
dg.ID GENDER_ID,
dc.ID CITY_ID,
ds.ID STUDY_ID,
AGE,
c.START,
c.STOP,
c.CODE SNOMED
from patient pat
join dimMarital dm
on dm.MARITAL = pat.MARITAL
join dimRace dr
on dr.RACE = pat.RACE
join dimEthnicity de
on de.ETHNICITY = pat.ETHNICITY
join dimCity dc
on dc.CITY = pat.CITY
join dimGender dg
on dg.GENDER = pat.GENDER
join dimStudy ds
on ds.STUDY = pat.STUDY
join "condition" c
on c.PATIENT = pat.Id
;""", conn_dwh)
factCondition.to_sql(name='factCondition', con=conn_dwh, if_exists='append', index=False)
###Output
_____no_output_____
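###Markdown
A quick sanity check that the fact tables were loaded (a sketch):
###Code
for table in ['factObservation', 'factProcedure', 'factCondition']:
    n = pd.read_sql_query(f'select count(*) as n from {table};', conn_dwh)['n'][0]
    print(f'{table}: {n} rows')
###Output
_____no_output_____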
###Markdown
Views in the data warehouse
* `v_patients`
* `v_observations`
* `v_conditions`
* `v_procedures`
These views make the data analysis easier.
###Code
cursor_dwh.executescript(
"""
-- Patients
CREATE view v_patients as
select DISTINCT
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
MARITAL,
RACE,
ETHNICITY,
GENDER,
CITY,
AGE,
STUDY
from factObservation fo
JOIN dimMarital dm
ON fo.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fo.RACE_ID
join dimEthnicity de
on de.ID = fo.ETHNICITY_ID
join dimGender dg
on dg.ID = fo.GENDER_ID
join dimCity dc
on dc.ID = fo.CITY_ID
join dimStudy ds
on ds.ID = fo.STUDY_ID ;
-- Observations
create view v_observations as
select
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
dm.MARITAL,
dr.RACE,
de.ETHNICITY,
dg.GENDER,
dc.CITY ,
AGE,
DATE,
LOINC,
dl.description DESCRIPTION,
VALUE,
UNITS,
ds.STUDY
from factObservation fo
join dimMarital dm
on fo.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fo.RACE_ID
join dimEthnicity de
on de.ID = fo.ETHNICITY_ID
join dimGender dg
on dg.ID = fo.GENDER_ID
join dimCity dc
on dc.ID = fo.CITY_ID
join dimLoinc dl
on dl.code = fo.LOINC
join dimStudy ds
on ds.ID = fo.STUDY_ID
;
-- Conditions
create view v_conditions as
select
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
dm.MARITAL,
dr.RACE,
de.ETHNICITY,
dg.GENDER,
dc.CITY ,
AGE,
"START" ,
STOP ,
SNOMED ,
dsn.description DESCRIPTION,
ds.STUDY
from factCondition fc
join dimMarital dm
on fc.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fc.RACE_ID
join dimEthnicity de
on de.ID = fc.ETHNICITY_ID
join dimGender dg
on dg.ID = fc.GENDER_ID
join dimCity dc
on dc.ID = fc.CITY_ID
join dimSnomed dsn
on dsn.code = fc.SNOMED
join dimStudy ds
on ds.ID = fc.STUDY_ID
;
-- Procedures
create view v_procedures as
select
PATIENT_PSPID PATIENT,
BIRTHYEAR,
DEATHYEAR,
dm.MARITAL,
dr.RACE,
de.ETHNICITY,
dg.GENDER,
dc.CITY ,
AGE,
DATE ,
SNOMED ,
dsn.description DESCRIPTION,
ds.STUDY
from factProcedure fc
join dimMarital dm
on fc.MARITAL_ID = dm.ID
join dimRace dr
on dr.ID = fc.RACE_ID
join dimEthnicity de
on de.ID = fc.ETHNICITY_ID
join dimGender dg
on dg.ID = fc.GENDER_ID
join dimCity dc
on dc.ID = fc.CITY_ID
join dimSnomed dsn
on dsn.code = fc.SNOMED
join dimStudy ds
on ds.ID = fc.STUDY_ID
;
--drop tables with raw data
drop table if exists patient;
drop table if exists "condition";
drop table if exists "procedure";
drop table if exists observation;
"""
)
###Output
_____no_output_____
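###Markdown
A quick look at one of the new views (a sketch; the other three views can be queried the same way):
###Code
pd.read_sql_query('select * from v_patients limit 3;', conn_dwh)
###Output
_____no_output_____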
###Markdown
Commit and close the connection
###Code
# commit and close connections
conn_dwh.commit()
conn_dwh.close()
###Output
_____no_output_____ |
notebooks/Health_insurance_cross_sell.ipynb | ###Markdown
0.0. IMPORTS
###Code
import pandas as pd
import numpy as np
import psycopg2 as pg
import seaborn as sns
import scikitplot as skplt
from matplotlib import pyplot as plt
from sklearn import preprocessing as pp
from sklearn import model_selection as ms
from sklearn import ensemble as en
from sklearn import neighbors as nh
from sklearn import linear_model as lm
import warnings
import pandas.io.sql as psql
from IPython.core.display import HTML
###Output
_____no_output_____
###Markdown
0.1. Helper Functions
###Code
def jupyter_settings():
%matplotlib inline
%pylab inline
plt.style.use( 'bmh' )
plt.rcParams['figure.figsize'] = [20, 8]
plt.rcParams['font.size'] = 24
display( HTML( '<style>.container { width:100% !important; }</style>') )
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option( 'display.expand_frame_repr', False )
sns.set()
warnings.filterwarnings( 'ignore' )
pd.options.display.float_format = '{:.6f}'.format
jupyter_settings()
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
0.2. Loading Data
###Code
# Credentials
host = 'comunidade-ds-postgres.c50pcakiuwi3.us-east-1.rds.amazonaws.com'
port = 5432
database = 'comunidadedsdb'
username = 'member'
pwd = 'cdspa'
# Connecting to database
conn = pg.connect( user=username,
password=pwd,
host=host,
port=port,
database=database )
cursor = conn.cursor()
# Query Schemas
query_schema = """
SELECT nspname
FROM pg_catalog.pg_namespace
"""
cursor.execute( query_schema )
print(cursor.fetchall())
# Query Tables
query_tables = """
SELECT * FROM pa004.users u LEFT JOIN pa004.vehicle v on (u.id = v.id)
LEFT JOIN pa004.insurance i on (u.id = i.id)
order by u.id
"""
# Defining raw dataset
df_raw_table = pd.read_sql(query_tables, conn)
# Closing cursor and connection
cursor.close()
conn.close()
df_raw_table.head()
###Output
_____no_output_____
###Markdown
1.0. Data Description
###Code
df1 = df_raw_table.copy()
df1.columns
# Removing duplicated columns
df1 = df1.loc[:,~df1.columns.duplicated()]
###Output
_____no_output_____
###Markdown
1.1. Data Dimension
###Code
print(f"Number of Rows: {df1.shape[0]}")
print(f"Number of Columns: {df1.shape[1]}")
###Output
Number of Rows: 381109
Number of Columns: 12
###Markdown
1.2. Data Types
###Code
df1.dtypes
###Output
_____no_output_____
###Markdown
1.3. Check NA
###Code
df1.isnull().sum()
###Output
_____no_output_____
###Markdown
1.4. Descriptive Statistics
###Code
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] )
# Central Tendency - Mean and Median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataFrame( num_attributes.apply( max ) ).T
d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T
# Concatenate
m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m
###Output
_____no_output_____
###Markdown
2.0. Feature Engineering
###Code
df2 = df1.copy()
df2.head()
# vehicle age
df2['vehicle_age'] = df2['vehicle_age'].apply( lambda x: 'over_2_years' if x =='> 2 Years' else 'between_1_2_year' if x == '1-2 Year' else 'below_1_year')
# vehicle damage
df2['vehicle_damage'] = df2['vehicle_damage'].apply( lambda x: 0 if x == "No" else 1 )
###Output
_____no_output_____
###Markdown
3.0. Data Filtering
###Code
df3 = df2.copy()
###Output
_____no_output_____
###Markdown
4.0. EDA
###Code
df4 = df3.copy()
df4.columns
###Output
_____no_output_____
###Markdown
4.1. Univariate Analysis
###Code
# Gender
pd.crosstab( df4['gender'], df4['response'] ).apply( lambda x: x / x.sum(), axis = 1)
aux = df4[['id', 'gender', 'response']].groupby( ['gender', 'response'] ).count().reset_index()
aux
# Age
plt.subplot(1, 3, 1)
sns.boxplot( x='response', y='age', data=df4)
plt.subplot(1, 3, 2)
aux00 = df4.loc[df4['response'] == 0, 'age']
sns.histplot( aux00 )
plt.subplot(1, 3, 3)
aux00 = df4.loc[df4['response'] == 1, 'age']
sns.histplot( aux00 )
# Region Code
aux = df4[['id', 'region_code', 'response']].groupby( ['region_code', 'response'] ).count().reset_index()
sns.scatterplot( x='region_code', y='id', hue='response', data=aux)
# Policy Sales Channel
aux = df4[['policy_sales_channel', 'response']].groupby( 'policy_sales_channel').sum().reset_index()
sns.barplot( x= 'response', y='policy_sales_channel', data=aux)
plt.xticks( rotation=90 );
# TODO: make a stacked percentage bar chart (a hedged sketch follows)
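# A hedged sketch of that chart (column names as in df4; each bar sums to 100% of a channel):
aux_pct = pd.crosstab(df4['policy_sales_channel'], df4['response'], normalize='index')
aux_pct.plot(kind='bar', stacked=True)
plt.ylabel('share of responses');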
# Driving License
aux = df4[['response', 'driving_license']].groupby('response').sum().reset_index()
sns.barplot( x='response', y='driving_license', data=aux)
# Vehicle Age
aux = df4[['id','vehicle_age', 'response' ]].groupby(['response', 'vehicle_age']).count().reset_index()
sns.barplot( x='vehicle_age', y='id', hue='response',data=aux)
aux.head()
# Vehicle Damage
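# (a sketch to fill the empty placeholder above) response share by vehicle damage,
# mirroring the Previously Insured crosstab below:
print(pd.crosstab(df4['vehicle_damage'], df4['response']).apply(lambda x: x / x.sum(), axis=1))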
# Previously Insured
pd.crosstab( df4['previously_insured'], df4['response'] ).apply( lambda x: x / x.sum(), axis = 1)
# Annual Premium
plt.subplot(1, 3, 1)
aux = df4.loc[df4['annual_premium'] < 70000]
sns.boxplot( x='response', y='annual_premium', data=aux)
aux = df4.loc[(df4['annual_premium'] > 10000) & (df4['annual_premium'] < 70000)]
plt.subplot(1, 3, 2)
aux00 = aux.loc[aux['response'] == 0, 'annual_premium']
sns.histplot( aux00 )
plt.subplot(1, 3, 3)
aux00 = aux.loc[aux['response'] == 1, 'annual_premium']
sns.histplot( aux00 )
# Vintage
plt.subplot(1, 3, 1)
sns.boxplot( x='response', y='vintage', data=df4)
plt.subplot(1, 3, 2)
aux00 = df4.loc[df4['response'] == 0, 'vintage']
sns.histplot( aux00 )
plt.subplot(1, 3, 3)
aux00 = df4.loc[df4['response'] == 1, 'vintage']
sns.histplot( aux00 )
###Output
_____no_output_____
###Markdown
5.0. Data Preparation
###Code
df5 = df4.copy()
X = df4.drop( 'response', axis=1 )
y = df4['response'].copy()
x_train, x_val, y_train, y_val = ms.train_test_split( X, y, test_size=0.2 )
df5 = pd.concat( [x_train, y_train], axis=1 )
###Output
_____no_output_____
###Markdown
5.1. Standardization
###Code
ss = pp.StandardScaler()
# Annual Premium
df5['annual_premium'] = ss.fit_transform( df5[['annual_premium']].values )
###Output
_____no_output_____
###Markdown
5.2. Rescaling
###Code
mms_age = pp.MinMaxScaler()
mms_vintage = pp.MinMaxScaler()
# Age
df5['age'] = mms_age.fit_transform( df5[['age']].values )
# Vintage
df5['vintage'] = mms_vintage.fit_transform( df5[['vintage']].values )
###Output
_____no_output_____
###Markdown
5.3. Encoder

5.3.1. Target Encoder
###Code
# Target Encoder
# Gender
target_encode_gender = df5.groupby('gender')['response'].mean()
df5['gender'] = df5['gender'].map( target_encode_gender )
# Region Code
target_encode_region_code = df5.groupby('region_code')['response'].mean()
df5['region_code'] = df5['region_code'].map(target_encode_region_code)
###Output
_____no_output_____
###Markdown
5.3.2. Dummies
###Code
# Vehicle Age
df5 = pd.get_dummies( df5, prefix='vehicle_age', columns=['vehicle_age'] )
###Output
_____no_output_____
###Markdown
5.3.3. Frequency Encoder
###Code
# Policy Sales Channel
fe_policy_sales_channel = df5.groupby( 'policy_sales_channel' ).size() / len(df5)
df5['policy_sales_channel'] = df5['policy_sales_channel'].map(fe_policy_sales_channel)
###Output
_____no_output_____
###Markdown
5.4. Encoding Validation Data
###Code
# Annual Premium (use the scalers fitted on the training data: transform only, no re-fitting on validation)
x_val['annual_premium'] = ss.transform( x_val[['annual_premium']].values )
# Age
x_val['age'] = mms_age.transform( x_val[['age']].values )
# Vintage
x_val['vintage'] = mms_vintage.transform( x_val[['vintage']].values )
# Gender
x_val['gender'] = x_val['gender'].map( target_encode_gender )
# Region Code
x_val['region_code'] = x_val['region_code'].map( target_encode_region_code)
# Vehicle Age
x_val = pd.get_dummies( x_val, prefix='vehicle_age', columns=['vehicle_age'] )
# Policy Sales Channel
x_val['policy_sales_channel'] = x_val['policy_sales_channel'].map(fe_policy_sales_channel)
# Fillna
x_val = x_val.fillna( 0 )
###Output
_____no_output_____
###Markdown
6.0. Feature Selection

6.1. Feature Importance
###Code
# model_definition
forest = en.RandomForestClassifier( n_estimators=250, random_state=0, n_jobs=-1 )
# Data Preparation
x_train_n = df5.drop( ['id', 'response'], axis=1 )
y_train_n = y_train.values
forest.fit( x_train_n, y_train_n )
importances = forest.feature_importances_
std = np.std( [tree.feature_importances_ for tree in forest.estimators_], axis=0 )
indices = np.argsort( importances )[::-1]
# Print the feature ranking
print("Feature Ranking:")
df = pd.DataFrame()
for i, j in zip( x_train_n, forest.feature_importances_ ):
aux = pd.DataFrame( {'feature': i, 'importance': j}, index=[0])
df = pd.concat( [df, aux], axis=0 )
print( df.sort_values( 'importance', ascending=False ) )
# Plot
plt.figure()
plt.title("Feature Importances")
plt.bar(range(x_train_n.shape[1]), importances[indices], color="r", yerr=std[indices], align="center" )
plt.xticks(range(x_train_n.shape[1]), indices )
plt.xlim([-1, x_train_n.shape[1]])
plt.show()
###Output
Feature Ranking:
feature importance
0 vintage 0.281093
0 annual_premium 0.252908
0 age 0.147616
0 region_code 0.096700
0 vehicle_damage 0.072802
0 policy_sales_channel 0.070406
0 previously_insured 0.048624
0 vehicle_age_below_1_year 0.012956
0 gender 0.009581
0 vehicle_age_between_1_2_year 0.004744
0 vehicle_age_over_2_years 0.002053
0 driving_license 0.000518
###Markdown
7.0. Machine Learning Model
###Code
cols_selected = ['vintage', 'annual_premium', 'age', 'region_code', 'vehicle_damage', 'policy_sales_channel', 'previously_insured']
x_train = df5[cols_selected]
x_val = x_val[cols_selected]
###Output
_____no_output_____
###Markdown
7.1. KNN
###Code
# Model Definition
knn_model = nh.KNeighborsClassifier( n_neighbors=6 )
# Model Training
knn_model.fit( x_train, y_train )
# Model Prediction - Poder de Generalização
yhat_knn = knn_model.predict_proba( x_val )
# Accumulative Gain
skplt.metrics.plot_cumulative_gain( y_val, yhat_knn );
###Output
_____no_output_____ |
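###Markdown
A quick, hedged sketch of a ranking view of the KNN scores (recall at the top 20% of the ordered customer list; `yhat_knn[:, 1]` is the predicted propensity):
###Code
k = int(0.2 * len(y_val))                 # size of the top-20% slice
order = np.argsort(yhat_knn[:, 1])[::-1]  # customers sorted by predicted propensity
recall_at_k = y_val.to_numpy()[order][:k].sum() / y_val.sum()
print(f'recall at top 20%: {recall_at_k:.2%}')
###Output
_____no_output_____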
projects/modelingsteps/ModelingSteps_1through4.ipynb | ###Markdown
Modeling Steps 1 - 4

**By Neuromatch Academy**

__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm

__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu

__Production editors:__ Ella Batty

**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**

**Note that this is the same as W1D2 Tutorial 1 - we provide it here as well for ease of access.**

---

Tutorial objectives

Yesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.

We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the 4 first steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)):

**Framing the question**
1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses

The remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.

**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling.

**Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects.

**Also**: "Models" here can be data analysis pipelines, not just computational models...

**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion based exercises and can be found in the Table of Contents on the left side of the notebook. Make sure you complete all within a section before moving on!

Demos

We will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistics regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
----

Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have build the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question

*Please discuss the following for about 25 min*

You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down:
* What exact aspect of data needs modeling?
  * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed)
  * Write everything down!
  * Also identify aspects of data that you do not want to address (yet)
* Define an evaluation method!
  * How will you know your modeling is good?
  * E.g. comparison to specific data (quantitative method of comparison?)
* For computational models: think of an experiment that could test your model
  * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment

You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need?

**Make sure to avoid the pitfalls!**

Recap of pitfalls:
* Question is too general: Remember: science advances one small step at a time. Get the small step right…
* Precise aspect of phenomenon you want to model is unclear: You will fail to ask a meaningful question
* You have already chosen a toolkit: This will prevent you from thinking deeply about the best way to answer your scientific question
* You don't have a clear goal: What do you want to get out of modeling?
* You don't have a potential experiment in mind: This will help concretize your objectives and think through the logic behind your goal

**Note**

The hardest part is Step 1. Once that is properly set up, all other steps should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step.

----

Step 2: Understanding the state of the art & background

Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data, there should be a 4d array called `spikes` that has spike counts (positive integers), a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimensions) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green line do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field... The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!

**Here is what you should get out of it**:
* Survey the literature
  * What's known?
  * What has already been done?
  * Previous models as a starting point?
  * What hypotheses have been emitted in the field?
  * Are there any alternative / complementary modeling approaches?
* What skill sets are required?
  * Do I need to learn something before I can start?
  * Ensure that no important aspect is missed
* Potentially provides specific data sets / alternative modeling approaches for comparison

**Do this AFTER the tutorial**

----

Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 3: Determine your basic ingredients

*Please discuss the following for about 25 min*

This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:

1. What parameters / variables are needed?
   * Constants?
   * Do they change over space, time, conditions…?
   * What details can be omitted?
   * Constraints, initial conditions?
   * Model inputs / outputs?
2. Variables needed to describe the process to be modelled?
   * Brainstorming!
   * What can be observed / measured? latent variables?
   * Where do these variables come from?
   * Do any abstract concepts need to be instantiated as variables?
     * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics
     * Instantiate them so that they relate to potential measurements!

This is a step where your prior knowledge and intuition is tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated.

**Make sure to avoid the pitfalls!**

Recap of pitfalls:
* I'm experienced, I don't need to think about ingredients anymore: Or so you think…
* I can't think of any ingredients: Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure?
* I have all inputs and outputs: Good! But what will link them? Thinking about that will start shaping your model and hypotheses
* I can't think of any links (= mechanisms): You will acquire a library of potential mechanisms as you keep modeling and learning. But the literature will often give you hints through hypotheses. If you still can't think of links, then maybe you're missing ingredients?

----

Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this can be written as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength, *N* is the noise level, and *k* is a free parameter.
> we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em><sup>2</sup> = <b>E</b>[(<em>v(t)</em> - <b>E</b>[<em>v(t)</em>])<sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focusing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
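###Markdown
The linear hypothesis has a single free parameter, so it is worth noting how *k* would be estimated. For a line through the origin, least squares gives the closed form *k* = Σ(*N* ⋅ *S*) / Σ(*N*<sup>2</sup>). The sketch below checks this on made-up numbers; the noise levels and illusion strengths are hypothetical stand-ins, not outputs of the model.
###Code
# Minimal sketch: least-squares estimate of k in S = k * N (hypothetical example values)
import numpy as np

N = np.array([0.1, 0.2, 0.4, 0.8])      # assumed noise levels (std of v(t))
S = np.array([0.09, 0.22, 0.38, 0.83])  # assumed illusion strengths (frequency of illusory percepts)

k_hat = np.sum(N * S) / np.sum(N ** 2)  # closed form for a line through the origin
print(f'fitted k = {k_hat:.3f}')
###Output
_____no_output_____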
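###Markdown
Hypothesis 1 amounts to comparing two pre-processing choices feeding the same classifier. A rough sketch of that comparison is below; it again assumes the `spikes` and `perception` arrays from the Setup section, and the width of the peak-acceleration window is an arbitrary choice made for illustration.
###Code
# Sketch: accumulated counts vs. a short window around peak acceleration (t = 0, bin index 150)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

y = np.concatenate([perception[0], perception[1]])  # perceived self-motion labels
trials = np.concatenate([spikes[0], spikes[1]])     # shape: (trials, neurons, timepoints)

X_accum = trials.sum(axis=-1)                       # features for c_accum: whole-trial counts
X_win = trials[:, :, 147:154].mean(axis=-1)         # features for c_win: ~70 ms around the peak

for name, X in [('accumulated', X_accum), ('peak window', X_win)]:
    acc = cross_val_score(LogisticRegression(solver='liblinear'), X, y, cv=8).mean()
    print(f'{name}: mean CV accuracy = {acc:.3f}')
###Output
_____no_output_____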
###Markdown
Modeling Steps 1 - 4**By Neuromatch Academy**__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu__Production editors:__ Ella Batty **Note that this is the same as W1D2 Tutorial 1 - we provide it here as well for ease of access.** **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial objectivesYesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the 4 first steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)): **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypothesesThe remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you likely have to revite some or all of these steps *before* you move on the the remaining steps of modeling. **Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects. **Also**: "Models" here can be data analysis pipelines, not just computational models...**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion based exercises and can be found in the Table of Content on the left side of the notebook. Make sure you complete all within a section before moving on! DemosWe will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do roleplay to showcase some common pitfalls to you based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example to you so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistics regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have build the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question *Please discuss the following for about 25 min*You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.** As a reminder, here is what you should discuss and write down:* What exact aspect of data needs modeling? * Answer this question clearly and precisely!Otherwise you will get lost (almost guaranteed) * Write everything down! * Also identify aspects of data that you do not want to address (yet)* Define an evaluation method! * How will you know your modeling is good? * E.g. comparison to specific data (quantitative method of comparison?)* For computational models: think of an experiment that could test your model * You essentially want your model to interface with this experiment, i.e. you want to simulate this experimentYou can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need? **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsQuestion is too general Remember: science advances one small step at the time. Get the small step right… Precise aspect of phenomenon you want to model is unclear You will fail to ask a meaningful question You have already chosen a toolkit This will prevent you from thinking deeply about the best way to answer your scientific question You don’t have a clear goal What do you want to get out of modeling? You don’t have a potential experiment in mind This will help concretize your objectives and think through the logic behind your goal **Note**The hardest part is Step 1. Once that is properly set up, all other should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere really) that you were not as clear on your question and goal than you thought. Revisiting Step 1 is frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the nest step. ---- Step 2: Understanding the state of the art & background Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data, there should be a 4d array called `spikes` that has spike counts (positive integers), a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimensions) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green line do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field...The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!**Here is what you should get out of it**:* Survey the literature * What’s known? * What has already been done? * Previous models as a starting point? * What hypotheses have been emitted in the field? * Are there any alternative / complementary modeling approaches?* What skill sets are required? * Do I need learn something before I can start? * Ensure that no important aspect is missed* Potentially provides specific data sets / alternative modeling approaches for comparison **Do this AFTER the tutorial** ---- Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 3: Determine your basic ingredients *Please discuss the following for about 25 min*This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:1) What parameters / variables are needed? * Constants? * Do they change over space, time, conditions…? * What details can be omitted? * Constraints, initial conditions? * Model inputs / outputs?2) Variables needed to describe the process to be modelled? * Brainstorming! * What can be observed / measured? latent variables? * Where do these variables come from? * Do any abstract concepts need to be instantiated as variables? * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics * Instantiate them so that they relate to potential measurements! This is a step where your prior knowledge and intuition is tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated. **Make sure to avoid the pitfalls!**Click here for a recap on pitfallsI’m experienced, I don’t need to think about ingredients anymore Or so you think… I can’t think of any ingredients Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure? I have all inputs and outputs Good! But what will link them? Thinking about that will start shaping your model and hypotheses I can’t think of any links (= mechanisms) You will acquire a library of potential mechanisms as you keep modeling and learning But the literature will often give you hints through hypotheses If you still can't think of links, then maybe you're missing ingredients? ---- Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would write as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
>we could simply use the frequency of occurance across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = <b>E</b>[<em>v(t)</em><sup>2</sup>],
</div>
where $\mathbf{E}$ stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the $\sigma$ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
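###Markdown
As a quick illustration of how a mathematically defined hypothesis pays off: once the hypothesis is written as *S* = *k* ⋅ *N*, fitting and checking it takes only a few lines. Below is a minimal sketch in which the noise levels and illusion strengths are made-up placeholder numbers standing in for what the model or experiment would produce.
###Code
import numpy as np
# hypothetical paired measurements: noise level N and illusion strength S
N = np.linspace(0.1, 1.0, 10)
S = 0.8 * N + 0.05 * np.random.randn(10)  # placeholder "data"
# least-squares estimate of the free parameter k in S = k * N
k = np.sum(N * S) / np.sum(N * N)
print(f"estimated k = {k:.3f}")
# small residuals would support the linear hypothesis
print(f"residual std = {np.std(S - k * N):.3f}")
###Output
_____no_output_____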
###Markdown
Modeling Steps 1 - 4

**By Neuromatch Academy**

__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm

__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu

__Production editors:__ Ella Batty

**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**

**Note that this is the same as W1D2 Tutorial 1 - we provide it here as well for ease of access.**

--- Tutorial objectives

Yesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling by thinking through the logic of modeling based on your project ideas.

We assume that you have a general idea of a project in mind, i.e. a preliminary question and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the first 4 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)):

**Framing the question**

1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses

The remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.

**Importantly**, we will guide you through Steps 1-4 today. After you do more work on projects, you will likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling.

**Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects.

**Also**: "Models" here can be data analysis pipelines, not just computational models...

**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion-based exercises and can be found in the Table of Contents on the left side of the notebook. Make sure you complete all within a section before moving on!

Demos

We will demo the modeling process to you based on the train illusion. The introductory video will explain the phenomenon to you. Then we will do a roleplay to showcase some common pitfalls based on a computational modeling project around the train illusion. In addition to the computational model, we will also provide a data neuroscience project example so you can appreciate similarities and differences. Enjoy!
###Code
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistics regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
###Output
_____no_output_____
###Markdown
---- Step 1: Finding a phenomenon and a question to ask about it
###Code
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have build the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Think! 1: Asking your own question

*Please discuss the following for about 25 min*

You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.**

As a reminder, here is what you should discuss and write down:

* What exact aspect of data needs modeling?
  * Answer this question clearly and precisely! Otherwise you will get lost (almost guaranteed)
  * Write everything down!
  * Also identify aspects of data that you do not want to address (yet)
* Define an evaluation method!
  * How will you know your modeling is good?
  * E.g. comparison to specific data (quantitative method of comparison?)
* For computational models: think of an experiment that could test your model
  * You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment

You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need?

**Make sure to avoid the pitfalls!**

A recap on pitfalls:
* "The question is too general." Remember: science advances one small step at a time. Get the small step right…
* "The precise aspect of the phenomenon you want to model is unclear." You will fail to ask a meaningful question.
* "You have already chosen a toolkit." This will prevent you from thinking deeply about the best way to answer your scientific question.
* "You don't have a clear goal." What do you want to get out of modeling?
* "You don't have a potential experiment in mind." This will help concretize your objectives and think through the logic behind your goal.

**Note**

The hardest part is Step 1. Once that is properly set up, all others should be easier. **BUT**: often you think that Step 1 is done, only to figure out in later steps (anywhere really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step.

---- Step 2: Understanding the state of the art & background

Here you will do a literature review (**to be done AFTER this tutorial!**).
###Code
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data: there should be a 4d array called `spikes` that has spike counts (positive integers), and a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimension) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green lines do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
###Markdown
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field... The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!

**Here is what you should get out of it**:

* Survey the literature
  * What's known?
  * What has already been done?
  * Previous models as a starting point?
  * What hypotheses have been emitted in the field?
  * Are there any alternative / complementary modeling approaches?
* What skill sets are required?
  * Do I need to learn something before I can start?
  * Ensure that no important aspect is missed
* Potentially provides specific data sets / alternative modeling approaches for comparison

**Do this AFTER the tutorial**

---- Step 3: Determining the basic ingredients
###Code
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mq4y1x77s", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____
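###Markdown
A minimal sketch of how the computational-model ingredients listed above could be instantiated (every number below is an arbitrary placeholder, not a value from the tutorial): a noisy vestibular signal *v(t)*, an integration step from acceleration to sensed velocity, and a threshold θ yielding the binary decision *d*.
###Code
import numpy as np
dt = 0.01  # time step in seconds
t = np.arange(0, 3, dt)  # one 3-second trial
# bell-shaped acceleration profile (placeholder stimulus)
acceleration = np.exp(-(t - 1.5) ** 2 / (2 * 0.5 ** 2))
sigma = 1.0  # vestibular noise amplitude (assumed)
v = acceleration + sigma * np.random.randn(t.size)  # noisy vestibular input v(t)
sensed_velocity = np.cumsum(v) * dt  # integration mechanism
theta = 0.5  # decision threshold (assumed)
d = np.abs(sensed_velocity[-1]) > theta  # binary self-motion decision
print(f"self-motion judged: {d}")
###Output
_____no_output_____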
###Markdown
Think! 3: Determine your basic ingredients

*Please discuss the following for about 25 min*

This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses, because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:

1. What parameters / variables are needed?
  * Constants?
  * Do they change over space, time, conditions…?
  * What details can be omitted?
  * Constraints, initial conditions?
  * Model inputs / outputs?

2. Variables needed to describe the process to be modelled?
  * Brainstorming!
  * What can be observed / measured? Latent variables?
  * Where do these variables come from?
  * Do any abstract concepts need to be instantiated as variables?
    * E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics
    * Instantiate them so that they relate to potential measurements!

This is a step where your prior knowledge and intuition is tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated.

**Make sure to avoid the pitfalls!**

A recap on pitfalls:
* "I'm experienced, I don't need to think about ingredients anymore." Or so you think…
* "I can't think of any ingredients." Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure?
* "I have all inputs and outputs." Good! But what will link them? Thinking about that will start shaping your model and hypotheses.
* "I can't think of any links (= mechanisms)." You will acquire a library of potential mechanisms as you keep modeling and learning, but the literature will often give you hints through hypotheses. If you still can't think of links, then maybe you're missing ingredients?

---- Step 4: Formulating specific, mathematically defined hypotheses
###Code
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1fh411h7aX", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would be written as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
> we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = <b>E</b>[<em>v(t)</em><sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
###Output
_____no_output_____ |
ch14_fig4.ipynb | ###Markdown
Ch14 Figure4
###Code
# The data science team used this data to create a word cloud of all the organization’s job injuries, and then the team presented a simple visualization of the cloud at their storytelling session
import numpy as np
import pandas as pd
import random as rd
string = 'Rope, cordage, twine, tire cord, and tire fabric mills Nursing and residential care facilities Fire protection Rendering and meat byproduct processing Skiing facilities Police protection Interurban and rural bus transportation Veterinary services Travel trailer and camper manufacturing Manufactured home (mobile home) manufacturing Truss manufacturing Hog and pig farming Steel foundries (except investment) Hospitals Heavy and civil engineering construction Prefabricated wood building manufacturing Truck trailer manufacturing Iron foundries Materials recovery facilities Other nonferrous metal foundries (except die-casting) Aluminum foundries (except die-casting) Luggage and leather goods stores Scheduled passenger air transportation Correctional institutions Ambulance services Abrasion Brain Injuries Bruising Burns Cluster Headaches Concussions Congestive Heart Failure Construction Injuries Coronary Artery Disease Defective Products Dislocation Flail Chest Fracture Hemothorax Herniated Disc Hip Pointer Hypothermia Lacerations Pinched Nerve Pneumothorax Prescription Medications Quadriplegia Definition Rib Fracture Sciatica Spinal Cord Injury Temporalmandibular Joint Tendons Ligaments Fascia Injury Traumatic Brain Injury Whiplash'
tokens = string.replace(',', '').replace('-', '').replace('(', '').replace(')', '').replace('and', '').split(' ')
data = []
for i in range(500):
claim_idx = np.arange(0, len(tokens)-1)
np.random.shuffle(claim_idx)
claim = ' '.join([tokens[x] for x in claim_idx[:rd.randint(1,30)]])
data.append([i, claim])
df = pd.DataFrame(data, columns=['id', 'claim'])
# df.to_csv('csv_output/ch14_fig6.csv', index=False)
df = pd.read_csv('csv_output/ch14_fig6.csv')
df.head()
df = pd.read_csv('csv_output/ch14_fig6.csv')
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
all_claims = ''
for x in df.iterrows():
    all_claims += x[1][1] + ' '  # join with a space so words from different claims don't fuse
wgb = pd.DataFrame(all_claims.split(' ')).groupby(0)[0].count()
rank = wgb.sort_values(ascending=False)[1:11]
f, ax = plt.subplots(1, figsize=(8,6))
# in current matplotlib, barh takes the bars' y positions as its first argument
ax.barh(np.arange(10)[::-1], rank);
ax.set_yticks(np.arange(10));
ax.set_yticklabels(rank.index[::-1]);
ax.set_title('word count')
f.savefig('svg_output/ch14_fig6.svg', format='svg')
# join the joined claims text and use https://www.jasondavies.com/wordcloud/ to generate word cloud
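# A possible local alternative, assuming the third-party `wordcloud` package
# is installed (`pip install wordcloud`); the parameters below are arbitrary:
from wordcloud import WordCloud
wc = WordCloud(width=800, height=400, background_color='white').generate(all_claims)
plt.figure(figsize=(8, 4))
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()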
###Output
_____no_output_____ |
notebooks/RL Development Notebook.ipynb | ###Markdown
RL Testing Notebook

Here I plan on testing different aspects of the RL algorithm.
###Code
%pylab inline
import seaborn as sns
from dphutils import fft_pad
from scipy.signal import fftconvolve
from scipy.ndimage.interpolation import shift
from itertools import product, combinations
from dphplotting import slice_plot, display_grid
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Roll functions

It seems pretty obvious that in certain situations we'll need to roll axes; the question is how much of a speed hit it is.
###Code
a = ones(2**16)
%timeit roll(a, -1)
%timeit a[::-1]
%timeit rfft(a)
a = arange(2**16).reshape(2**8,2**8)
%timeit roll(roll(a, 1, axis=1), 1, axis=0)
def rollmulti(a, shift):
""""""
for i in range(a.ndim):
a = roll(a, shift, i)
return a
%timeit rollmulti(a, 1)
###Output
69 µs ± 5.36 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
RL Stuff
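For reference, the three update functions below all implement the same standard Richardson-Lucy multiplicative step (they differ only in how the convolutions are carried out):

$$u^{(t+1)} = u^{(t)} \cdot \left( \frac{d}{u^{(t)} \ast K} \ast \hat{K} \right)$$

where $d$ is the measured image, $K$ the PSF, $\hat{K}$ the PSF flipped along every axis, and $\ast$ denotes convolution.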
###Code
def rl_core_matlab(image, psf, y_t):
"""The core update step of the RL algorithm"""
otf = rfftn(ifftshift(fft_pad(psf, image.shape, mode="constant")))
reblur = irfftn(otf * rfftn(y_t), y_t.shape)
reblur = ensure_positive(reblur)
im_ratio = image / reblur
estimate = irfftn(np.conj(otf) * rfftn(im_ratio), im_ratio.shape)
return y_t * estimate
def rl_core_accurate(image, psf, y_t):
"""The core update step of the RL algorithm
An accurate version"""
reblur = fftconvolve(y_t, psf, "same")
reblur = ensure_positive(reblur)
im_ratio = image / reblur
# reverse slicing
s = slice(None, None, -1)
estimate = fftconvolve(im_ratio, psf[(s, ) * psf.ndim], "same")
return y_t * estimate
def rl_core_fast(image, psf, y_t):
"""The core update step of the RL algorithm
Fast version"""
pad_psf = fft_pad(psf, image.shape, mode="constant")
otf = rfftn(ifftshift(pad_psf))
reblur = irfftn(otf * rfftn(y_t), y_t.shape)
reblur = ensure_positive(reblur)
im_ratio = image / reblur
estimate = irfftn(np.conj(otf) * rfftn(im_ratio), im_ratio.shape)
    # roll by one sample to compensate for the shift ifftshift introduces when
    # an image dimension is odd while the corresponding PSF dimension is even
    for i, (s, p) in enumerate(zip(image.shape, psf.shape)):
        if s % 2 and not p % 2:
            estimate = roll(estimate, 1, i)
return y_t * estimate
def richardson_lucy(image, psf, iterations=10, core=rl_core_accurate, guess=False):
"""Richardson-Lucy deconvolution."""
# initialize variable for iterations
# previous estimate
u_tm1 = None
# current estimate
if guess:
u_t = image
else:
u_t = ones_like(image) * image.mean()
# previous difference
g_tm1 = None
for i in range(iterations):
# call the update function
u_tp1 = core(image, psf, u_t)
        # ensure positivity
u_t = ensure_positive(u_tp1)
# return final estimate
return u_tp1
def ensure_positive(data):
"""Make sure data is positive and has no zeros
For numerical stability
If we realize that mutating data is not a problem
    and that changing in place could lead to significant
speed ups we can lose the data.copy() line"""
# make a copy of the data
data = data.copy()
data[data <= 0] = np.finfo(data.dtype).resolution
return data
###Output
_____no_output_____
###Markdown
1D Examples

Check the algorithm with 1-dimensional examples first.
###Code
for k_size, s_size in product((66, 65), (256, 257)):
# make kernel
x = linspace(-2.5, 2.5, k_size, True)
k = exp(-x**2)
# normalize kernel
k /= k.sum()
# make signal
x = linspace(-10, 10, s_size)
f = logical_and(x < 3, x > -3)
y = fftconvolve(f, k, "same")
y = ensure_positive(y)
k = ensure_positive(k)
decon_a = richardson_lucy(y, k)
decon_m = richardson_lucy(y, k, core=rl_core_matlab)
decon_f = richardson_lucy(y, k, core=rl_core_fast)
fig, (ax0, ax1, ax2, ax3) = subplots(1, 4, figsize=(9,3))
ax0.plot(y)
ax0.plot(k)
ax0.plot(f)
ax1.plot(decon_a)
ax1.plot(decon_m)
ax1.plot(decon_f)
# plot differences
ax2.plot(decon_a - decon_f)
ax2.set_title("Accurate - Fast")
ax3.plot(decon_a - decon_m, label="Accurate - matlab")
# ax2.plot(decon_m - decon_f, label="matlab - Fast")
fig.suptitle("k size = {}, s size = {}".format(k_size, s_size), y=1)
###Output
_____no_output_____
###Markdown
2D Examples

Move on to two dimensions to see if there's any difference.
###Code
for k_size, s_size in product((65, 66), (256, 257)):
# make kernel
x = linspace(-2.5, 2.5, k_size, True)
k = exp(-x**2)
k = k[newaxis] * k[:, newaxis]
# normalize kernel
k /= k.sum()
# make signal
x = linspace(-10, 10, s_size)
f = logical_and(x < 3, x > -3)
f = f[newaxis] * f[:, newaxis]
y = fftconvolve(f, k, "same")
y = ensure_positive(y)
k = ensure_positive(k)
d = dict(
f=f,
k=k,
decon_a= richardson_lucy(y, k),
decon_m= richardson_lucy(y, k, core=rl_core_matlab),
decon_f= richardson_lucy(y, k, core=rl_core_fast)
)
d["Accurate - Fast"] = (d["decon_a"] - d["decon_f"])
d["Accurate - MatLab"] = d["decon_a"] - d["decon_m"]
fig, axs = display_grid(d)
fig.suptitle("k size = {}, s size = {}".format(k_size, s_size))
###Output
_____no_output_____
###Markdown
Check the effects of noise
###Code
x = linspace(-2.5, 2.5, 66, True)
k = exp(-x**2)
k = k[newaxis] * k[:, newaxis]
# normalize kernel
k /= k.sum()
k = pad(k, ((1, 1), (10, 3)), "constant")
# make signal
x = linspace(-10, 10, 257)
f = logical_and(x < 3, x > -3)
f = f[newaxis] * f[:, newaxis]
y = fftconvolve(f, k, "same")
y = pad(y, ((20,20), (3,2)), "constant")
y = ensure_positive(y)
k = ensure_positive(k)
y[y < 0] = 0
yy = poisson(y * 100) + randn(*y.shape) ** 2
k[k < 0] = 0
kk = poisson(k * 1e6) + randn(*k.shape) ** 2
kk /= kk.sum()
# also see the difference between using the image data as the first "guess" or using the mean value
with_guess = richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=True)
without_guess = richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False)
matshow(with_guess - without_guess, cmap="seismic")
colorbar()
slice_plot(with_guess)
slice_plot(without_guess)
plt.figure()
plot(with_guess[128])
plot(without_guess[128])
y.shape
###Output
_____no_output_____
###Markdown
The mean value is ___clearly___ superior. The result is nearly noiseless. This makes sense as any noise present at the beginning guess will be amplified throughout the algorithm.
###Code
from dphutils import richardson_lucy as old_rl
from scipy.signal import signaltools as sig
def get_fshapeslice(in1, in2):
in1 = np.asarray(in1)
in2 = np.asarray(in2)
s1 = np.array(in1.shape)
s2 = np.array(in2.shape)
assert (s1 >= s2).all()
shape = s1 + s2 - 1
# Speed up FFT by padding to optimal size for FFTPACK
fshape = [sig.fftpack.helper.next_fast_len(int(d)) for d in shape]
fslice = tuple([slice(0, int(sz)) for sz in shape])
return fshape, fslice
# Pre-1.9 NumPy FFT routines are not threadsafe. For older NumPys, make
# sure we only call rfftn/irfftn from one thread at a time.
# sp1 = rfftn(in1, fshape, threads=threads)
# sp2 = rfftn(in2, fshape, threads=threads)
# ret = (irfftn(sp1 * sp2, fshape, threads=threads)[fslice].copy())
def rl_core2(image, otf, iotf, y_t, fshape, fslice):
"""The core update step of the RL algorithm
Fast version"""
reblur = irfftn(rfftn(y_t, fshape) * otf, fshape)[fslice]
reblur = sig._centered(reblur, image.shape)
reblur = ensure_positive(reblur)
im_ratio = image / reblur
estimate = irfftn(rfftn(im_ratio, fshape) * iotf, fshape)[fslice]
estimate = sig._centered(estimate, image.shape)
return y_t * estimate
def richardson_lucy2(image, psf, iterations=10):
"""Richardson-Lucy deconvolution."""
# initialize variable for iterations
fshape, fslice = get_fshapeslice(image, psf)
    # precompute the forward OTF and the adjoint OTF (FFT of the flipped PSF)
    # once, so the PSF is not re-padded and re-transformed on every iteration
    otf = rfftn(psf, fshape)
    iotf = rfftn(psf[::-1, ::-1], fshape)
u_tm1 = None
# current estimate
u_t = ones_like(image) * image.mean()
# previous difference
g_tm1 = None
for i in range(iterations):
# call the update function
u_tp1 = rl_core2(image, otf, iotf, u_t, fshape, fslice)
        # ensure positivity
u_t = ensure_positive(u_tp1)
# return final estimate
return u_tp1
slice_plot(richardson_lucy2(yy, k, 100))
allclose(richardson_lucy2(yy, k, 20), richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False))
%timeit richardson_lucy2(yy, k, 20)
%timeit richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False)
%timeit richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False)
%timeit richardson_lucy(yy, k, 20, core=rl_core_fast, guess=False)
%timeit richardson_lucy2(yy, k, 20)
d = dict(
decon_a = richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False),
decon_f = richardson_lucy(yy, k, 20, core=rl_core_fast, guess=False),
decon_m = richardson_lucy(yy, k, 20, core=rl_core_matlab, guess=False),
decon_fa = richardson_lucy2(yy, k, 20),
)
display_grid(d)
display_grid({k1 + " - " + k2:v1-v2 for (k1, v1), (k2, v2) in combinations(d.items(), 2)})
%load_ext autoreload
%autoreload 2
from pyDecon.decon import richardson_lucy as new_rl
allclose(new_rl(yy, k, 20, prediction_order=0), richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False))
matshow(new_rl(yy, k, 20, prediction_order=2) - new_rl(yy, k, 20, prediction_order=0), cmap="seismic")
colorbar()
matshow(new_rl(yy, k, 13, prediction_order=1) - new_rl(yy, k, 20, prediction_order=0), cmap="seismic")
colorbar()
matshow(new_rl(yy, k, 14, prediction_order=2) - new_rl(yy, k, 20, prediction_order=0), cmap="seismic")
colorbar()
%timeit richardson_lucy(yy, k, 20, core=rl_core_accurate, guess=False)
%timeit richardson_lucy(yy, k, 20, core=rl_core_fast, guess=False)
%timeit new_rl(yy, k, 20, 0)
%timeit new_rl(yy, k, 20, 1)
%timeit new_rl(yy, k, 13, 1)
%timeit new_rl(yy, k, 13, 1, threads=1)
%timeit new_rl(yy, k, 13, 1, threads=2)
%timeit new_rl(yy, k, 13, 1, threads=4)
%timeit new_rl(yy, k, 13, 1, threads=8)
%timeit new_rl(yy, k, 13, 1, threads=16)
import pyfftw
x = linspace(-2.5, 2.5, 64, True)
k = exp(-x**2)
k = k[newaxis] * k[:, newaxis]
# normalize kernel
k /= k.sum()
# make signal
x = linspace(-10, 10, 512)
f = logical_and(x < 3, x > -3)
f = f[newaxis] * f[:, newaxis]
y = fftconvolve(f, k, "same")
y = ensure_positive(y)
k = ensure_positive(k)
y[y < 0] = 0
yy = poisson(y * 100) + randn(*y.shape) ** 2
k[k < 0] = 0
kk = poisson(k * 1e6) + randn(*k.shape) ** 2
kk /= kk.sum()
import time
pyfftw.forget_wisdom()
start = time.time()
%time new_rl(yy, k, 13, 1, threads=1)
end = time.time()
print(end - start)
###Output
_____no_output_____
###Markdown
Let's try a bigger problem to see if more threads is better
###Code
x = linspace(-2.5, 2.5, 65, True)
k = exp(-x**2)
k = k[newaxis] * k[:, newaxis]
# normalize kernel
k /= k.sum()
k = pad(k, ((1, 1), (10, 3)), "constant")
# make signal
x = linspace(-10, 10, 2048)
f = logical_and(x < 3, x > -3)
f = f[newaxis] * f[:, newaxis]
y = fftconvolve(f, k, "same")
y = pad(y, ((20,20), (3,2)), "constant")
y = ensure_positive(y)
k = ensure_positive(k)
y[y < 0] = 0
yy = poisson(y * 100) + randn(*y.shape) ** 2
k[k < 0] = 0
kk = poisson(k * 1e6) + randn(*k.shape) ** 2
kk /= kk.sum()
%timeit new_rl(yy, k, 13, 1, threads=1)
%timeit new_rl(yy, k, 13, 1, threads=2)
%timeit new_rl(yy, k, 13, 1, threads=4)
%timeit new_rl(yy, k, 13, 1, threads=8)
%timeit new_rl(yy, k, 13, 1, threads=16)
%timeit new_rl(yy, k, 13, 1, planner_effort="FFTW_PATIENT")
%timeit new_rl(yy, k, 13, 1, planner_effort="FFTW_EXHAUSTIVE")
###Output
The slowest run took 91.16 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 10.1 s per loop
The slowest run took 268.40 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 10.9 s per loop
|
docs/tutorials/google/colab.ipynb | ###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Notebook template
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
###Output
_____no_output_____
###Markdown
Make a copy of this template

You will need to have access to Quantum Computing Service before running this colab.

This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments.

How to download iPython notebooks from GitHub

You can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window. This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb).

You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:

```
git clone https://github.com/quantumlib/Cirq.git
```

How to open Google Colab

You can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook. This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others.

More Documentation Links

* [Quantum Engine concepts](../../google/concepts.md)
* [Quantum Engine documentation](../../google/engine.md)
* [Cirq documentation](https://cirq.readthedocs.io)
* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb)

Authenticate and install Cirq

For details of authentication and installation, please see [go/quantum-engine-quickstart](http://go/quantum-engine-quickstart).

Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install `cirq-unstable` instead of `cirq` to get the most up-to-date features of cirq.

1. Enter the Cloud project ID you'd like to use in the `project_id` field.
2. Then run the cell below (and go through the auth flow for access to the project id you entered).
###Code
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
if project_id == '':
raise Exception("Please setup project_id in this cell!")
def setup_auth():
"""Runs the user through the Colab OAuth process.
Sets the local Application Default Credentials. For more information on
using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
from google.colab import auth
auth.authenticate_user(clear_output=False)
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
setup_auth()
print("Authentication complete.")
###Output
Getting OAuth2 credentials.
Press enter after entering the verification code.
Authentication complete.
Requirement already satisfied: cirq in /usr/local/lib/python3.6/dist-packages (0.8.2)
Requirement already satisfied: freezegun~=0.3.15 in /usr/local/lib/python3.6/dist-packages (from cirq) (0.3.15)
Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (1.16.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from cirq) (3.7.4.3)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from cirq) (0.7)
Requirement already satisfied: sympy in /usr/local/lib/python3.6/dist-packages (from cirq) (1.1.1)
Requirement already satisfied: networkx~=2.4 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.5)
Requirement already satisfied: protobuf~=3.12.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (3.12.4)
Requirement already satisfied: sortedcontainers~=2.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.2.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from cirq) (1.4.1)
Requirement already satisfied: numpy~=1.16 in /usr/local/lib/python3.6/dist-packages (from cirq) (1.18.5)
Requirement already satisfied: requests~=2.18 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.23.0)
Requirement already satisfied: matplotlib~=3.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (3.2.2)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from cirq) (1.0.5)
Requirement already satisfied: python-dateutil!=2.0,>=1.0 in /usr/local/lib/python3.6/dist-packages (from freezegun~=0.3.15->cirq) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from freezegun~=0.3.15->cirq) (1.15.0)
Requirement already satisfied: setuptools>=34.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (49.6.0)
Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (2018.9)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.52.0)
Requirement already satisfied: google-auth<2.0dev,>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.17.2)
Requirement already satisfied: grpcio<2.0dev,>=1.8.2; extra == "grpc" in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.31.0)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy->cirq) (1.1.0)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx~=2.4->cirq) (4.4.2)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (2020.6.20)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (1.2.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (4.1.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (0.4.8)
Cirq installed.
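###Markdown
As an optional check (not part of the original tutorial), you can confirm which Application Default Credentials and default project were picked up before creating an Engine. This minimal sketch uses the standard `google.auth` package, which the dependency list above shows is already installed.
###Code
import google.auth

# Resolve the Application Default Credentials the Engine client will use.
# Raises DefaultCredentialsError if no credentials are configured.
credentials, default_project = google.auth.default()
print("Credentials type:", type(credentials).__name__)
print("Default project:", default_project)
###Output
_____no_output_____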
###Markdown
Create an Engine variable

The following creates an engine variable that can be used to run programs under the project ID you entered above.
###Code
import cirq
# Create an Engine object to use, providing the project id and the args
# used for authentication (produced by running the authentication above).
engine = cirq.google.Engine(project_id=project_id)
###Output
_____no_output_____
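###Markdown
As a quick sanity check that the Engine can reach the service, you can list the processors your project has access to. This sketch uses `engine.list_processors()` (the same call the later revisions of this notebook use to verify authentication); the `processor_id` attribute on each entry is assumed to match what `run_sweep` expects in `processor_ids`.
###Code
# List the processors this project can access; their ids are the
# values to pass in run_sweep's processor_ids argument.
for processor in engine.list_processors():
    print(processor.processor_id)
###Output
_____no_output_____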
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
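###Markdown
Since the circuit measures a qubit after an `X**0.5` gate, roughly half of the repetitions should read 1. A minimal sketch of that check, reusing the `results` list built above (illustrative, not part of the original tutorial):
###Code
# results is a list of '0'/'1' strings, one per repetition.
ones = results.count('1')
print(f"{ones} ones out of {len(results)} repetitions "
      f"({ones / len(results):.1%}); expect roughly 50% for X**0.5.")
###Output
_____no_output_____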
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Notebook template

Make a copy of this template

You will need to have access to Quantum Computing Service before running this Colab. This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in Colab (or Jupyter), and modify it to begin your experiments.

How to download iPython notebooks from GitHub

You can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window. This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb).

You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:

```
git clone https://github.com/quantumlib/Cirq.git
```

How to open Google Colab

You can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook. This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others.

More Documentation Links

* [Quantum Engine concepts](../../google/concepts.md)
* [Quantum Engine documentation](../../google/engine.md)
* [Cirq documentation](https://cirq.readthedocs.io)
* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb)

Authenticate and install Cirq

For details of authentication and installation, please see [go/quantum-engine-quickstart](http://go/quantum-engine-quickstart).

Note: The code below will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install `cirq-unstable` instead of `cirq` to get the most up-to-date features of cirq.

1. Enter the Cloud project ID you'd like to use in the `project_id` field.
2. Then run the cell below (and go through the auth flow for access to the project ID you entered).
###Code
# The Google Cloud Project id to use.
project_id = 'quantum-cloud-client' #@param {type:"string"}
def setup_auth():
"""Runs the user through the Colab OAuth process.
Sets the local Application Default Credentials. For more information on
using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
from google.colab import auth
auth.authenticate_user(clear_output=False)
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
setup_auth()
print("Authentication complete.")
!pip install cirq
print("Cirq installed.")
###Output
Getting OAuth2 credentials.
Press enter after entering the verification code.
Authentication complete.
Requirement already satisfied: cirq in /usr/local/lib/python3.6/dist-packages (0.8.2)
Requirement already satisfied: freezegun~=0.3.15 in /usr/local/lib/python3.6/dist-packages (from cirq) (0.3.15)
Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (1.16.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from cirq) (3.7.4.3)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from cirq) (0.7)
Requirement already satisfied: sympy in /usr/local/lib/python3.6/dist-packages (from cirq) (1.1.1)
Requirement already satisfied: networkx~=2.4 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.5)
Requirement already satisfied: protobuf~=3.12.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (3.12.4)
Requirement already satisfied: sortedcontainers~=2.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.2.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from cirq) (1.4.1)
Requirement already satisfied: numpy~=1.16 in /usr/local/lib/python3.6/dist-packages (from cirq) (1.18.5)
Requirement already satisfied: requests~=2.18 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.23.0)
Requirement already satisfied: matplotlib~=3.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (3.2.2)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from cirq) (1.0.5)
Requirement already satisfied: python-dateutil!=2.0,>=1.0 in /usr/local/lib/python3.6/dist-packages (from freezegun~=0.3.15->cirq) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from freezegun~=0.3.15->cirq) (1.15.0)
Requirement already satisfied: setuptools>=34.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (49.6.0)
Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (2018.9)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.52.0)
Requirement already satisfied: google-auth<2.0dev,>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.17.2)
Requirement already satisfied: grpcio<2.0dev,>=1.8.2; extra == "grpc" in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.31.0)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy->cirq) (1.1.0)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx~=2.4->cirq) (4.4.2)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (2020.6.20)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (1.2.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (4.1.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (0.4.8)
Cirq installed.
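###Markdown
Given the note above about stable versus unstable installs, a quick way to confirm which release actually got installed is to print the version string (illustrative, not part of the original tutorial):
###Code
import cirq

# The version string distinguishes stable releases from pre-releases.
print(cirq.__version__)
###Output
_____no_output_____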
###Markdown
Create an Engine variable

The following creates an engine variable that can be used to run programs under the project ID you entered above.
###Code
import cirq
# Create an Engine object to use, providing the project id and the args
# used for authentication (produced by running the authentication above).
engine = cirq.google.Engine(project_id=project_id)
###Output
_____no_output_____
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
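###Markdown
The example above runs a single fixed circuit, but `run_sweep` can also scan a symbolic parameter over many values in one job, which is where its name comes from. The following is a minimal sketch of that usage, not part of the original tutorial; it reuses `engine` and `q` from above and assumes the same processor and gate set.
###Code
import sympy

# Sweep the exponent of the X gate over five values in a single job.
t = sympy.Symbol('t')
sweep_circuit = cirq.Circuit(cirq.X(q)**t, cirq.measure(q, key='m'))
sweep = cirq.Linspace('t', start=0, stop=1, length=5)
sweep_job = engine.run_sweep(
    program=sweep_circuit,
    params=sweep,
    repetitions=1000,
    processor_ids=['rainbow'],
    gate_set=cirq.google.SYC_GATESET)
for result in sweep_job.results():
    # Average measured value per parameter setting.
    print(result.params, result.measurements['m'].mean())
###Output
_____no_output_____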
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template

Setup

Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via `pip install cirq --pre`.
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq --pre
print("installed cirq.")
###Output
###Markdown
Make a copy of this template

You will need to have access to Quantum Computing Service before running this Colab. This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in Colab (or Jupyter), and modify it to begin your experiments.

How to download iPython notebooks from GitHub

You can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window. This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb).

You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:

```
git clone https://github.com/quantumlib/Cirq.git
```

How to open Google Colab

You can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook. This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others.

More Documentation Links

* [Quantum Engine concepts](../../google/concepts.md)
* [Quantum Engine documentation](../../google/engine.md)
* [Cirq documentation](https://cirq.readthedocs.io)
* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb)

Authenticate and install Cirq

For details of authentication and installation, please see [Get started with Quantum Computing Service](start.ipynb).

Note: The code below will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of `cirq` using `pip install --pre cirq` instead of `pip install cirq` to get the most up-to-date features of cirq.

1. Enter the Cloud project ID you'd like to use in the `project_id` field.
2. Then run the cell below (and go through the auth flow for access to the project ID you entered).
###Code
import cirq_google as cg
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
processor_id = "" #@param {type:"string"}
from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook
device_sampler = get_qcs_objects_for_notebook(project_id, processor_id)
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are setup.
Successful authentication to Google Cloud.
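###Markdown
The object returned by `get_qcs_objects_for_notebook` bundles more than a sampler. The attribute names in the sketch below (`sampler`, `is_simulator`) are assumptions about that helper and may differ across Cirq versions; when no QCS access is available, the helper is expected to fall back to a simulator.
###Code
# Inspect what the notebook helper handed back (attribute names assumed).
sampler = device_sampler.sampler
print("Running against a simulator:", device_sampler.is_simulator)
###Output
_____no_output_____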
###Markdown
Create an Engine variable

The following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
import cirq_google
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use; get_engine() picks up the project id
# from the environment.
try:
engine = cirq_google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
###Output
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq_google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
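###Markdown
Rather than joining the raw bits into a string, each result can also be summarized with its `histogram` helper. A minimal sketch reusing the `job` from above (illustrative, not part of the original tutorial):
###Code
# Result.histogram counts how often each measured value occurred.
first_result = job.results()[0]
print(first_result.histogram(key='m'))
###Output
_____no_output_____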
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq --pre
print("installed cirq.")
###Output
###Markdown
Make a copy of this template

You will need to have access to Quantum Computing Service before running this Colab. This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in Colab (or Jupyter), and modify it to begin your experiments.

How to download iPython notebooks from GitHub

You can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window. This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb).

You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:

```
git clone https://github.com/quantumlib/Cirq.git
```

How to open Google Colab

You can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook. This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others.

More Documentation Links

* [Quantum Engine concepts](../../google/concepts.md)
* [Quantum Engine documentation](../../google/engine.md)
* [Cirq documentation](https://cirq.readthedocs.io)
* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb)

Authenticate and install Cirq

For details of authentication and installation, please see [Get started with Quantum Computing Service](start.ipynb).

Note: The code below will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of `cirq` using `pip install --pre cirq` instead of `pip install cirq` to get the most up-to-date features of cirq.

1. Enter the Cloud project ID you'd like to use in the `project_id` field.
2. Then run the cell below (and go through the auth flow for access to the project ID you entered).
###Code
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
if project_id == '':
import os
if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
raise Exception("Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
project_id = os.environ['GOOGLE_CLOUD_PROJECT']
else:
os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. In case the notebook is executed in Jupyter notebook
or other IPython runtimes, no interactive login is provided, it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login`
was executed already.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
except:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are setup.
Successful authentication to Google Cloud.
###Markdown
Create an Engine variable

The following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
import cirq_google
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use; get_engine() picks up the project id
# from the `GOOGLE_CLOUD_PROJECT` environment variable set above.
try:
engine = cirq_google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
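# Hypothetical follow-up (an addition, not part of the original notebook):
# print the ids of the processors this project can access. Assumes each
# EngineProcessor exposes a `processor_id` attribute; skipped if the engine
# could not be created above.
try:
    for processor in engine.list_processors():
        print(f"Available processor: {processor.processor_id}")
except (NameError, PermissionDenied):
    pass  # engine missing or project inaccessible; errors were reported above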
###Output
###Markdown
Example
###Code
# A simple example: apply a square-root-of-X gate to one qubit and measure it.
q = cirq.GridQubit(5, 2)  # a qubit at row 5, column 2 of the device grid
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
    program=circuit,                   # the circuit to execute
    repetitions=10000,                 # number of measurement shots
    processor_ids=['rainbow'],         # target quantum processor
    gate_set=cirq_google.SYC_GATESET)  # gate set used to serialize the circuit
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]  # bits of the first (only) result
print('Success! Results:')
print(''.join(results))
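# Hypothetical post-processing (an addition, not part of the original
# notebook): X**0.5 on |0> ideally yields 1 with probability 0.5, so the
# fraction of ones is a quick sanity check on the hardware run.
num_ones = sum(int(b) for b in results)
print(f"Fraction of ones: {num_ones / len(results):.3f}")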
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
###Markdown
Make a copy of this templateYou will need to have access to Quantum Computing Service before running this colab.This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments. How to download iPython notebooks from githubYou can retrieve ipython notebooks in the cirq repository by going to the [doc directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this colab template can be found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the "Raw" button in the upper right part of the window:![Raw button](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/images/colab_github.png)This will show the entire file contents. Right-click and select "Save as" to save this file to your computer. Make sure to save to a file with a ".ipynb" extension. (Note: you may need to select "All files" from the format dropdown instead of "text"). You can also get to this colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb). You can also retrieve the entire cirq repository by running the following command in a terminal that has git installed. `git clone https://github.com/quantumlib/Cirq.git` How to open colabYou can open a new colab from your Google Drive window or by visiting the [colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the colaboratory site, you can use the menu to upload an ipython notebook:![Upload menu](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/images/colab_upload.png)This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others. More Documentation Links* [Quantum Engine concepts](../../google/concepts.md)* [Quantum Engine documentation](../../google/engine.md)* [Cirq documentation](https://cirq.readthedocs.io)* [Colab documentation](https://colab.sandbox.google.com/notebooks/welcome.ipynb) Authenticate and install cirqFor details of authentication and installation, please see [go/quantum-engine-quickstart](https://go/quantum-engine-quickstart). Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install `cirq-unstable` instead of `cirq` to get the most up-to-date features of cirq.1. Enter the cloud project_id you'd like to use in the 'project_id' field.2. Then run the cell below (and go through the auth flow for access to the project id you entered).![Quantum Engine console](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/images/run-code-block.png)
###Code
# The Google Cloud Project id to use.
project_id = 'quantum-cloud-client' #@param {type:"string"}
def setup_auth():
"""Runs the user through the Colab OAuth process.
Sets the local Application Default Credentials. For more information on
using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
setup_auth()
print("Authentication complete.")
!pip install cirq
print("Cirq installed.")
###Output
Getting OAuth2 credentials.
Press enter after entering the verification code.
Authentication complete.
Requirement already satisfied: cirq in /usr/local/lib/python3.6/dist-packages (0.8.2)
Requirement already satisfied: freezegun~=0.3.15 in /usr/local/lib/python3.6/dist-packages (from cirq) (0.3.15)
Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (1.16.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from cirq) (3.7.4.3)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from cirq) (0.7)
Requirement already satisfied: sympy in /usr/local/lib/python3.6/dist-packages (from cirq) (1.1.1)
Requirement already satisfied: networkx~=2.4 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.5)
Requirement already satisfied: protobuf~=3.12.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (3.12.4)
Requirement already satisfied: sortedcontainers~=2.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.2.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from cirq) (1.4.1)
Requirement already satisfied: numpy~=1.16 in /usr/local/lib/python3.6/dist-packages (from cirq) (1.18.5)
Requirement already satisfied: requests~=2.18 in /usr/local/lib/python3.6/dist-packages (from cirq) (2.23.0)
Requirement already satisfied: matplotlib~=3.0 in /usr/local/lib/python3.6/dist-packages (from cirq) (3.2.2)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from cirq) (1.0.5)
Requirement already satisfied: python-dateutil!=2.0,>=1.0 in /usr/local/lib/python3.6/dist-packages (from freezegun~=0.3.15->cirq) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from freezegun~=0.3.15->cirq) (1.15.0)
Requirement already satisfied: setuptools>=34.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (49.6.0)
Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (2018.9)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.52.0)
Requirement already satisfied: google-auth<2.0dev,>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.17.2)
Requirement already satisfied: grpcio<2.0dev,>=1.8.2; extra == "grpc" in /usr/local/lib/python3.6/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (1.31.0)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy->cirq) (1.1.0)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx~=2.4->cirq) (4.4.2)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests~=2.18->cirq) (2020.6.20)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib~=3.0->cirq) (1.2.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (4.1.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2.0dev,>=0.4.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq) (0.4.8)
Cirq installed.
###Markdown
Create an Engine variable The following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
# Create an Engine object to use, providing the project id and the args
# used for authentication (produced by running the authentication above).
engine = cirq.google.Engine(project_id=project_id)
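# Hypothetical local smoke test (an addition, not part of the original
# notebook): simulate the same kind of one-qubit circuit with cirq's
# built-in simulator before spending hardware time.
sim_qubit = cirq.LineQubit(0)
sim_circuit = cirq.Circuit(cirq.X(sim_qubit)**0.5, cirq.measure(sim_qubit, key='m'))
sim_result = cirq.Simulator().run(sim_circuit, repetitions=100)
print(sim_result.histogram(key='m'))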
###Output
_____no_output_____
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
###Output
###Markdown
Make a copy of this templateYou will need to have access to Quantum Computing Service before running this colab.This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments. How to download iPython notebooks from GitHubYou can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window:This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb). You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:```git clone https://github.com/quantumlib/Cirq.git``` How to open Google ColabYou can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook:This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others. More Documentation Links* [Quantum Engine concepts](../../google/concepts.md)* [Quantum Engine documentation](../../google/engine.md)* [Cirq documentation](https://cirq.readthedocs.io)* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb) Authenticate and install CirqFor details of authentication and installation, please see [go/quantum-engine-quickstart](http://go/quantum-engine-quickstart). Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of `cirq` using `pip install --pre cirq` instead of `pip install cirq` to get the most up-to-date features of cirq.1. Enter the Cloud project ID you'd like to use in the `project_id` field.2. Then run the cell below (and go through the auth flow for access to the project id you entered).
###Code
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
if project_id == '':
import os
if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
raise Exception("Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
project_id = os.environ['GOOGLE_CLOUD_PROJECT']
else:
os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. If the notebook is executed in a Jupyter notebook
or another IPython runtime, no interactive login is provided; it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login`
was executed already.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
except:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are setup.
Successful authentication to Google Cloud.
###Markdown
Create an Engine variableThe following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use, providing the project id.
try:
engine = cirq.google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
###Output
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
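# Hypothetical extension (an addition, not part of the original notebook):
# run_sweep can also scan a symbolic parameter, e.g. the X-gate exponent.
# The hardware call is left commented out to avoid an accidental submission.
import sympy
theta = sympy.Symbol('theta')
sweep_circuit = cirq.Circuit(cirq.X(q)**theta, cirq.measure(q, key='m'))
sweep = cirq.Linspace(key='theta', start=0.0, stop=1.0, length=5)
# sweep_job = engine.run_sweep(program=sweep_circuit, params=sweep,
#                              repetitions=1000, processor_ids=['rainbow'],
#                              gate_set=cirq.google.SYC_GATESET)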
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
###Output
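###Markdown
Optional sanity checkAfter the install cell, you can confirm which version of `cirq` the runtime picked up. This is a minimal sketch; `cirq.__version__` is the package's standard version string.
###Code
# Print the installed cirq version.
import cirq
print(cirq.__version__)
###Output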
###Markdown
Make a copy of this templateYou will need to have access to Quantum Computing Service before running this colab.This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments. How to download iPython notebooks from GitHubYou can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window:This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb)You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:```git clone https://github.com/quantumlib/Cirq.git``` How to open Google ColabYou can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook:This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others. More Documentation Links* [Quantum Engine concepts](../../google/concepts.md)* [Quantum Engine documentation](../../google/engine.md)* [Cirq documentation](https://cirq.readthedocs.io)* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb) Authenticate and install CirqFor details of authentication and installation, please see [Get started with Quantum Computing Service](start.ipynb). Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of `cirq` using `pip install --pre cirq` instead of `pip install cirq` to get the most up-to-date features of cirq.1. Enter the Cloud project ID you'd like to use in the `project_id` field.2. Then run the cell below (and go through the auth flow for access to the project id you entered).
###Code
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
import os

# `os` is needed in both branches below, so import it unconditionally
# (the original imported it only in the first branch, which made the
# `else` branch fail with a NameError).
if project_id == '':
  if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
    raise Exception("Please set up project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
  project_id = os.environ['GOOGLE_CLOUD_PROJECT']
else:
  os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. If the notebook is executed in a Jupyter notebook
or another IPython runtime, no interactive login is provided; it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or that `gcloud auth application-default login`
was already executed.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
  except ImportError:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are set up.
Successful authentication to Google Cloud.
###Markdown
Create an Engine variableThe following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use, providing the project id and the args
try:
engine = cirq.google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
###Output
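###Markdown
The cell above already calls `engine.list_processors()` as a connectivity check. As a short follow-up sketch (assuming that call succeeded), you can print the id of each processor your project can access; `processor_id` is an attribute of the returned `EngineProcessor` objects.
###Code
# Print the id of every processor visible to this project.
for processor in engine.list_processors():
  print(processor.processor_id)
###Output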
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110… (three long measurement bitstrings truncated for readability)
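###Markdown
Local check without hardware accessIf you do not yet have Quantum Engine access, you can run the same circuit on the simulator that ships with cirq. This is a minimal sketch using `cirq.Simulator`; the qubit and circuit mirror the example above, and the repetition count of 20 is chosen arbitrarily here.
###Code
# Run the example circuit on the local simulator instead of the Engine.
import cirq

q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=20)
print(result)  # prints a line like m=01101001010110100101
###Output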
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
###Output
###Markdown
Make a copy of this templateYou will need to have access to Quantum Computing Service before running this colab.This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments. How to download iPython notebooks from GitHubYou can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window:This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb)You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:```git clone https://github.com/quantumlib/Cirq.git``` How to open Google ColabYou can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook:This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others. More Documentation Links* [Quantum Engine concepts](../../google/concepts.md)* [Quantum Engine documentation](../../google/engine.md)* [Cirq documentation](https://cirq.readthedocs.io)* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb) Authenticate and install CirqFor details of authentication and installation, please see [Get started with Quantum Computing Service](start.ipynb).Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of `cirq` using `pip install --pre cirq` instead of `pip install cirq` to get the most up-to-date features of cirq.1. Enter the Cloud project ID you'd like to use in the `project_id` field.2. Then run the cell below (and go through the auth flow for access to the project id you entered).
###Code
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
import os

# `os` is needed in both branches below, so import it unconditionally
# (the original imported it only in the first branch, which made the
# `else` branch fail with a NameError).
if project_id == '':
  if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
    raise Exception("Please set up project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
  project_id = os.environ['GOOGLE_CLOUD_PROJECT']
else:
  os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. If the notebook is executed in a Jupyter notebook
or another IPython runtime, no interactive login is provided; it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or that `gcloud auth application-default login`
was already executed.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
  except ImportError:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are set up.
Successful authentication to Google Cloud.
###Markdown
Create an Engine variableThe following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
import cirq_google
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use, providing the project id and the args
try:
engine = cirq_google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
###Output
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq_google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110… (three long measurement bitstrings truncated for readability)
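###Markdown
Rather than reading raw bitstrings, you can aggregate the outcomes into counts. This is a short sketch applying `cirq.Result.histogram` to the `job` object from the example above; for an `X**0.5` gate the two outcomes should each occur roughly half the time.
###Code
# Aggregate the 'm' measurement outcomes into a collections.Counter.
counts = job.results()[0].histogram(key='m')
print(counts)  # e.g. Counter({0: 5041, 1: 4959})
###Output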
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template View on QuantumLib Run in Google Colab View source on GitHub Download notebook
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
###Output
###Markdown
Make a copy of this templateYou will need to have access to Quantum Computing Service before running this colab.This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments. How to download iPython notebooks from GitHubYou can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window:This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb)You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:```git clone https://github.com/quantumlib/Cirq.git``` How to open Google ColabYou can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook:This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others. More Documentation Links* [Quantum Engine concepts](../../google/concepts.md)* [Quantum Engine documentation](../../google/engine.md)* [Cirq documentation](https://cirq.readthedocs.io)* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb) Authenticate and install CirqFor details of authentication and installation, please see [go/quantum-engine-quickstart](http://go/quantum-engine-quickstart). Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install `cirq-unstable` instead of `cirq` to get the most up-to-date features of cirq.1. Enter the Cloud project ID you'd like to use in the `project_id` field.2. Then run the cell below (and go through the auth flow for access to the project id you entered).
###Code
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
if project_id == '':
import os
if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
raise Exception("Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
project_id = os.environ['GOOGLE_CLOUD_PROJECT']
else:
os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. In case the notebook is executed in Jupyter notebook
or other IPython runtimes, no interactive login is provided, it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login`
was executed already.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
except:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are setup.
Successful authentication to Google Cloud.
###Markdown
Create an Engine variableThe following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use, providing the project id and the args
try:
engine = cirq.google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
###Output
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
###Markdown
Copyright 2020 The Cirq Developers
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
###Markdown
Notebook template View on QuantumAI Run in Google Colab View source on GitHub Download notebook
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
###Output
###Markdown
Make a copy of this templateYou will need to have access to Quantum Computing Service before running this colab.This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments. How to download iPython notebooks from GitHubYou can retrieve iPython notebooks in the Cirq repository by going to the [docs/ directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this Colab template is found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the *Raw* button in the upper-right part of the window:This will show the entire file contents. Right-click and select *Save as* to save this file to your computer. Make sure to save to a file with a `.ipynb` extension (you may need to select *All files* from the format dropdown instead of *text*). You can also get to this Colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb)You can also retrieve the entire Cirq repository by running the following command in a terminal that has `git` installed:```git clone https://github.com/quantumlib/Cirq.git``` How to open Google ColabYou can open a new Colab notebook from your Google Drive window or by visiting the [Colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the Colaboratory site, you can use the menu to upload an iPython notebook:This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others. More Documentation Links* [Quantum Engine concepts](../../google/concepts.md)* [Quantum Engine documentation](../../google/engine.md)* [Cirq documentation](https://cirq.readthedocs.io)* [Colab documentation](https://colab.research.google.com/notebooks/welcome.ipynb) Authenticate and install CirqFor details of authentication and installation, please see [Get started with Quantum Computing Service](start.ipynb).Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of `cirq` using `pip install --pre cirq` instead of `pip install cirq` to get the most up-to-date features of cirq.1. Enter the Cloud project ID you'd like to use in the `project_id` field.2. Then run the cell below (and go through the auth flow for access to the project id you entered).
###Code
import cirq_google as cg
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
processor_id = "" #@param {type:"string"}
from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook
device_sampler = get_qcs_objects_for_notebook(project_id, processor_id)
###Output
Notebook is not executed with Colab, assuming Application Default Credentials are setup.
Successful authentication to Google Cloud.
###Markdown
Create an Engine variableThe following creates an engine variable which can be used to run programs under the project ID you entered above.
###Code
import cirq
import cirq_google
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use, providing the project id and the args
try:
engine = cirq_google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
###Output
###Markdown
Example
###Code
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq_google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
###Output
Success! Results:
000000111101010011111100010000000001110000100000001111001110101110110110000010100000011001011001110000100100011000111010010101001010110001101011000001000101100111100111100111001111100110001111011000100110101000000000011110110101011010101001101101111111001011101000001000100010111010110011111111110111010010010000001111001001000011011110011101001010111101000111110001110010010101001111101110000000010010000100110011000100010100001101010101011111000100000001110010000000101101111111100100101100000110011010100000000110011000011100010011010111011110101010011000110100100001011001100101101000011111100011101010111110010110000101110110010100011000110000110001000100100111001000001000101111110110110111101110110110001011000100011100101010100101111101111110001100111000111001011100001101001110110100001110011101100000100101010101001000110101101010101100100101000100011110010100001100010101101111011101110100101100100101101100110111101001001111001001110101101101011100111011010101010101010101000110010100000100010110100101000010111010101101111110101000101111101011111001110111111101010000000110000000110010001100101111011101111111001110101011100001110110011111110110000111110000111010010000101001100111011101000001110111001101000111101111010101001101011101100100010111101000011000011101011111000110101101001010001011111110000101111100000101000000100110110011001100110000100010110100110010101110110001101100011110011100000100110000111010001011001110100111110101100010011101000111100111100011000010100100010001011011000101111100110011010111000111011010110101000101010111010010011010000010110100010101001100011010101010101111011101111111001011101111110000000100000001111010101010011111000001000100010100011101100110010001100110110101000010101000110000000101000111011000000000011110011000001010110000100010111010110001011000001010100011011010001011010100100101001000101011101000100110000000001110111011001011110000000001001100101001001011111101010000101101010010100100111010110000000101100110001110011111001100110111100000100110111110011101010011111110100001110010011011101010001001000000110111111110010111110100010110101000001001010000010011100000011010111110100010100101001000111110100000000101101100011111000111110011101000011000010000011001010111001101010011001001011111101010110010110100111100100100000000010001111100010110100101101011001010010000001100111110011110111111101101011001000011100100011000001100111111101001001010011000001110010101100100011101101010100000101110001000101100111100110100011101100011011000100111001000001011101011010010011000001001110100100010001001001110000001001011010010001101101011100011011011000000100101100000000010011100010111001011000111100100111100101101111001011001010101000110001010100010000011011101011100111100011100001000010010101100100110011110001111100000101011001111011100010100001000011001101011011010111010000010101101100101100111001100100110101000100000011111000010110000110010100011000001111000111000010100100110001010101000000010010110000110100111100100110100001010000101111001110000010000000010001010110011110110001110110101010001011000010101001101110101101010011011110011001100101100100011010000100010001011101101111011101101111010001001001110000101100111100101011100000001011011010000101011101000011111010100000011010110011000101000110111010100111010011011011001111110011110111111100101001101100110111100000011011011111011100001000101100101101101011000000001001010010100011011000010001101101001111010001010110110100100100110000110000101100011001101101111010010000001000101101111100001011001011101001111001111001
000010111101110001001001110101111000010101001000001100111010111100000101111001011111000111011010010100100110000001100101001010001110100110011011100000101111101101000011110001011110000001100011000111101010000011100010110000001110010010111110010111110011101111010011001110001111010011101000001011100011010001100100001001110010111101001010011010010001110110011100110010111111111010110100101011110000011001000011110001010010011000101000001001110111011101011001010000011000111110000100100010000101000101011010111011011011110010111000111001001000101011000000100010101011111000110001111101110001011011101100000001010111011110110011000001000000001010011111001100001000000101010111101010000011000100101100001010111110000111111000110110001111011001001011001111011001000101110001001000000001000010101001011110100101010101100101111010000010111110011101001011001000111000101010100011100110111001110000001010110101100010001011001101101111110001011011111010011111110110000110000000000010100010101110011100000010011000001011011001000111100001100010111000001010111001101100001011111011111010110100000010000011010101000100010010000001000010101000001101000100000111100100101001110110100100110010010101111101101010010101000101111011111001110110000101011111111011011101000001111110000110110100001011101000011000111100010111001011011010111000010000011110100110110011110000011010001110110111101111101001000111001001101111010110110010000100100011010011011000101011101110110110010110100011100100011011110100000000101100010011010001001100100011111001010111001001001101101101100111111001000011101101110101000101101100001101011101110000101100011110010100001101010101001000100001000010000100010110011111111111000011111001001100100011011011101000000011011010010011110110101100111111001101010000011100010010011011000000000111011001000001010001010010100100000110110110010100111110100011110000111111001111000100011000001101110010011010000001011101101100000001100000000101110101011010011110100110110010000111100001110111001100111111101110101100010001001100110101010011000000110011100000101100101011010011100110111110110100010011111000111110000111001011000000010010101101011100100110001110111011101110011101111111111111101001010110011111110010101110110101101101011111001101111011101111110011010101000000000010111100001010010010000010000111001101001100100111011111111000110101000101000001011011111111101101000011010100101011111111011011100011101100010110000100001101011101001000011111011010011001011100001101110000011001100011010001111110011000010110111110100110010100101010100101111110001011011000010011011111001011110001101011000001100110011101001000011111101100001011010011111110100010110110011000000101001100001111110101000000001001110110110111100010110001010000110001101110000110111110101110101011000111000010101101110001010011001100110011000110000110100100000001010110000100000011010111011100011010010010100000111011011100100000110110001101100001010111001110000111100001110101100101101010000110110101111100100010100101110100001011101100010101110111011010001111011110111011001110111101011101011100100110111000101100010001110100011100000100000010010000010100010000100101010111011101001001100010000001000011110101100001010010011100111110111101100111010100000010000111111100011110010001110111111100000111111110110000011001000111011000111101010001000000001000001010101001010100000100100000001100110110000111011101000100010101100101000000010011110100111101001000101101110101101101110111010000011011000110000101001011100011101001001111001000100101000100100001111000000100000111100110100111101
1011100010110010010101000001011000000100000101000111110010001000100000000110111100001011100010011100001011100110100001000010101101010100001000000100100010101001000101110110111010111111000110000100010000011110100100111010100000011111101010110100110110100001000001001001010101011010101011110011111000011000000111101110100000100101111100111110110000100110100000011111110111110100001011010111010000100101010101110001100100101000101101101001011101010101100111001001101001000110101111111001111011110101011101001110100000010010001101101100000010110000011001100001110110010100000000011110111100000100001101011001100110110000001001000001101111001101101111001110011101101101010001101100110111110011110010001000110110010100100010100111000000100010111011100100110100100110111000110001010010000110000001101000010100011101110010110101100001101001101000000100100010000010000011110110101100000011110010101010010001100110101001100101000101100010101001010000111000111000000000001001100100110001111000110100001000000010001110011011001110000011001000011101010011110000101001101111100110000111110101111111111101011100101011001001010101101001100110110111000111110111011101000011001100101010111101100111010001001000001011001101101111110001001111111111110001000100011100101100110110101101100110110010110011111001101001011011111110001011110110000100010000101011110011001000100010100011101111100010011011010011100111110111000111100110000010011100111011100011001011001101101011001000001001010011101111010000101011010100011010011001111011010001111010010111001111000001000001111100100010110001000100011011110001010011101000110000100110010111110111110011000101001000111000001100001110010000011100010001111100011001110011100001100011010010000001000001100010100100110101001101110011001001010100011010110011111010101100111110010011100101101111001110001011001000101101100111010000100111101010101010100001100011001101011111111110010001111100111001001110010101100111001010000001000100000111101110111001110111100100110101000110000001100000101001010001111000001110110100000011100011011100011011100101010100110011111110110011110100100111100110110100111010100001011101010001000010011010011101010001010101000100011111110001010000000100100111101110100000100100110100000101001111011011011110011111101001011100010111110110100011001100000101000110100101101011001010111100000000111010101011101110101100100101001001111010001111100101010011011000101000111100010010010000001111001001100011110001000110011010110110100100111011100111010110111010110011010101010111100110000101101111101110010001010011000010111001110110000100100101010001001010110110001001001011001000010100111100010110110101000111100110010110001010010010000110010101111000010001100111010100010111100110000000010001101001011100110001011000100101000011101000010110100001110011010101111011101111110110011111111000111010010010100010011100100100100011111111011111110100110001100110
|
models/6_logistic_regression_with_sgd.ipynb | ###Markdown
Reference: Machine Learning- Sudeshna Sarkar (Logistic Regression): https://www.youtube.com/watch?v=CE03E80wbRE We will build a basic logistic regression model on the Titanic dataset
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc, roc_auc_score
np.random.seed(42)
df = pd.read_csv('../data/titanic_train.csv')
df = df.replace([np.inf, -np.inf], np.nan)
df['Age'].fillna(df['Age'].mean(), inplace=True)
df['has_cabin'] = df['Cabin'].apply(lambda val: 0 if pd.isnull(val) else 1)
df.columns
df = df[['Age', 'Survived', 'Sex', 'PassengerId', 'has_cabin']]
df.set_index('PassengerId', inplace=True)
df['Sex'] = df['Sex'].map({'male':0, 'female':1})
df.shape
df.head()
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler(feature_range=(-1,1))
df_survived = df.pop('Survived')
min_max_scaler.fit(df)
df['Survived'] = df_survived
X_train, X_test = train_test_split(df, test_size=0.3)
Y_train = X_train.pop('Survived')
Y_test = X_test.pop('Survived')
X_train = min_max_scaler.transform(X_train)
X_test = min_max_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Build a custom logistic regression model
###Code
# sigmoid is simply: 1/(1+ e^(-x))
def sigmoid(values):
return 1/(1+np.exp(-values))
sigmoid(np.array([1,2,3]))
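# Note (added sketch, not in the original notebook): np.exp(-values) can
# overflow for large-magnitude negative inputs. A numerically stable variant
# splits the computation by sign:
def stable_sigmoid(values):
    out = np.empty_like(values, dtype=float)
    pos = values >= 0
    out[pos] = 1 / (1 + np.exp(-values[pos]))
    ex = np.exp(values[~pos])
    out[~pos] = ex / (1 + ex)
    return out
stable_sigmoid(np.array([-1000, 0, 1000]))  # [0., 0.5, 1.] with no overflow warnings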
# log_likelihood which we want to maximise
# Formula:
# Sigma (y_i * log(h(x_i)) + (1-y_i) log(1-h(x_i)))
# where y_i is the i-th target value
# h(x_i) is sigmoid(B'x_i) [sigmoid of the linear combination of the i-th input features]
def log_likelihood(expected_output, predicted_output):
return np.sum(expected_output * np.log(predicted_output) + (1 - expected_output) * np.log(1 - predicted_output))
print('log_likelihood 1', log_likelihood(np.array([1,1,0,0]), np.array([0.9,0.8,0.3,0.1])))
print('log_likelihood 2 ', log_likelihood(np.array([1,1,0,0]), np.array([0.9,0.9,0.3,0.1])))
# log_likelihood 2 > 1 because the predicted values are closer to expected values
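# Caveat (added sketch): log_likelihood breaks if any predicted probability is
# exactly 0 or 1, since log(0) = -inf. Clipping predictions is a common guard:
def safe_log_likelihood(expected_output, predicted_output, eps=1e-12):
    predicted_output = np.clip(predicted_output, eps, 1 - eps)
    return np.sum(expected_output * np.log(predicted_output)
                  + (1 - expected_output) * np.log(1 - predicted_output))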
X_train.shape, Y_train.shape
Y_train.values
# if we want an intercept value then we add a new feature of all 1s
# Formula: for each step
# compute the prediction as B' * x, where B is the weights and x is the feature row
# the prediction is passed through the sigmoid function
# the difference between the actual and predicted output gives the error
# this error, scaled by the learning rate, gives the gradient used to update the weights
def logistic_regression(features, target, num_steps, learning_rate, fit_intercept=True, log_steps=100):
if fit_intercept:
# adding intercept, Beta_0
intercept = np.ones((features.shape[0], 1))
features = np.hstack((intercept, features))
weights = np.zeros(features.shape[1])
for i in range(num_steps):
predicted_outputs = []
for idx in range(features.shape[0]):
row = features[idx,:]
prediction = np.dot(row, weights)
predicted_output = sigmoid(prediction)
predicted_outputs.append(predicted_output)
# Stochastic Gradient descent to update weights
output_error = target[idx] - predicted_output
gradient = np.dot(row.T, output_error)
weights += learning_rate * gradient
if i % log_steps == 0:
print('step:', i, 'log_likelihood:', log_likelihood(target, np.array(predicted_outputs)))
return weights
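# For comparison (a hypothetical sketch, not part of the original notebook):
# the same model can be trained full-batch, replacing the per-row loop with
# one matrix product per step.
def logistic_regression_batch(features, target, num_steps, learning_rate, fit_intercept=True):
    if fit_intercept:
        features = np.hstack((np.ones((features.shape[0], 1)), features))
    weights = np.zeros(features.shape[1])
    for _ in range(num_steps):
        predictions = sigmoid(features @ weights)
        # gradient ascent on the log-likelihood
        weights += learning_rate * features.T @ (target - predictions)
    return weights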
# Same process as above but with no learning rate or weight updates; just making predictions.
def logistic_predict(features, weights, fit_intercept=True):
if fit_intercept:
# adding intercept, Beta_0
intercept = np.ones((features.shape[0], 1))
features = np.hstack((intercept, features))
prediction = np.dot(features, weights)
predicted_output = sigmoid(prediction)
return predicted_output
weights = logistic_regression(X_train, Y_train.values, num_steps=10000, learning_rate=5e-5, log_steps=1000)
print('intercept', weights[0], 'coef_', weights[1:])
roc_auc_score(Y_test, logistic_predict(X_test, weights))
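# A quick extra check (added sketch): accuracy at the usual 0.5 threshold
predictions = (logistic_predict(X_test, weights) >= 0.5).astype(int)
print('accuracy:', (predictions == Y_test.values).mean())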
###Output
_____no_output_____
###Markdown
Build a logistic regression model with Sklearn for validation
###Code
from sklearn.linear_model import LogisticRegression
# C = 5 to reduce regularization on coefficients, as we don't have any regularization in our model.
model = LogisticRegression(fit_intercept=True, C=5, max_iter=10000, verbose=5)
model = model.fit(X_train, Y_train)
roc_auc_score(Y_test, model.predict_proba(X_test)[:,1])
print('intercept', model.intercept_, 'coef', model.coef_)
###Output
intercept [-0.02545202] coef [[-0.76697725 1.22293027 0.84869292]]
|
prediction_exploration.ipynb | ###Markdown
Results drill down
###Code
results_ = results[['ImageId','number_of_ships','f2']].drop_duplicates()
size = len(results_)
empty = results_[results_['number_of_ships']==0]['f2']
f2_empty, size_empty = empty.mean(), len(empty)
gain_empty = (1-f2_empty)*size_empty/size
non_empty = results_[results_['number_of_ships']!=0]['f2']
f2_non_empty, size_non_empty = non_empty.mean(), len(non_empty)
gain_non_empty = (1-f2_non_empty)*size_non_empty/size
ship_1 = results_[results_['number_of_ships']==1]['f2']
f2_1_ship, size_1_ship = ship_1.mean(), len(ship_1)
gain_1_ship = (1-f2_1_ship)*size_1_ship/size
ship_2_to_5 = results_[results_['number_of_ships'].between(2,5)]['f2']
f2_2_to_5_ships, size_2_to_5_ships = ship_2_to_5.mean(), len(ship_2_to_5)
gain_2_to_5_ship = (1-f2_2_to_5_ships)*size_2_to_5_ships/size
ship_6_to_10 = results_[results_['number_of_ships'].between(6,10)]['f2']
f2_6_to_10_ships, size_6_to_10_ships = ship_6_to_10.mean(), len(ship_6_to_10)
gain_6_to_10_ship = (1-f2_6_to_10_ships)*size_6_to_10_ships/size
ship_10_plus = results_[results_['number_of_ships']>10]['f2']
f2_more_than_10_ships, size_more_than_10_ships = ship_10_plus.mean(), len(ship_10_plus)
gain_10_ships = (1-f2_more_than_10_ships)*size_more_than_10_ships/size
print('Empty f2: {0:.3f} | gain: {1:.3f}'.format(f2_empty, gain_empty))
print('Non Empty f2: {0:.3f} | gain: {1:.3f}'.format(f2_non_empty, gain_non_empty))
print('1 ship f2: {0:.3f} | gain: {1:.3f}'.format(f2_1_ship, gain_1_ship))
print('2-5 ships f2: {0:.3f} | gain: {1:.3f}'.format(f2_2_to_5_ships, gain_2_to_5_ship))
print('6-10 ships f2: {0:.3f} | gain: {1:.3f}'.format(f2_6_to_10_ships, gain_6_to_10_ship))
print('10+ ships f2: {0:.3f} | gain: {1:.3f}'.format(f2_more_than_10_ships, gain_10_ships))
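# The repeated blocks above could be collapsed into one loop (a sketch added
# for illustration; bucket edges taken from the code above):
buckets = {'Empty': (0, 0), '1 ship': (1, 1), '2-5 ships': (2, 5),
           '6-10 ships': (6, 10), '10+ ships': (11, float('inf'))}
for name, (lo, hi) in buckets.items():
    f2 = results_[results_['number_of_ships'].between(lo, hi)]['f2']
    print('{} f2: {:.3f} | gain: {:.3f}'.format(name, f2.mean(), (1 - f2.mean()) * len(f2) / size))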
###Output
_____no_output_____
###Markdown
Predictions Exploration Non Empty
###Code
selected_predictions = results[(results['number_of_ships']!=0) &
(results['f2'].between(0.0, 1.0))
][['ImageId','number_of_ships','f2']].\
drop_duplicates().sort_values('f2').reset_index(drop=True)
selected_predictions.head()
@ipy.interact(idx=ipy.IntSlider(min=0.0, max=len(selected_predictions)-1, step=1.0, value=0.0))
def plot(idx):
idx_pred = selected_predictions.iloc[idx]
print('f2 {}'.format(idx_pred['f2']))
plot_results_for_id(results, idx=idx_pred['ImageId'])
# def plot(idx):
# idx_pred = selected_predictions.iloc[idx]
# print('f2 {}'.format(idx_pred['f2']))
# plot_results_for_id(results, idx=idx_pred['ImageId'])
# plot(0)
###Output
_____no_output_____ |
Practicas/practica2/sistemas_mecanicos.ipynb | ###Markdown
Mechanical systems The goal of this practice is to analyze mechanical systems using the mathematical tools of control engineering. Let's start by modeling the following mechanical system:![](./imagenes/mra.png) If we draw the free-body diagram and write the sum of forces along $x$, we obtain:$$\sum F_x = F - F_R - F_A = ma$$where $F$ is the force applied to the right, $F_R$ is the reaction force of the spring, and $F_A$ is the reaction force of the damper. If we now take into account that:$$\begin{align}F_R &= k x \\F_A &= c v = c\dot{x} \\ma &= m \ddot{x}\end{align}$$we can write this sum of forces as:$$F - kx - c\dot{x} = m \ddot{x}$$ and, taking the Laplace transform and factoring common terms:$$F(s) = X(s)\left[ ms^2 + cs + k \right]$$ Therefore, taking $F(s)$ as the input of our system and $X(s)$ as its output, we obtain the transfer function:$$\frac{X(s)}{F(s)} = \frac{1}{ms^2 + cs + k}$$ and we can simulate its behavior:
###Code
from control import tf, step_response, root_locus
from numpy import linspace
from matplotlib.pyplot import plot
m = 1200/4
c = 1500
k = 15000
G = tf([1], [m, c, k])
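# Quick sanity check (added sketch, not in the original practice): for
# m*x'' + c*x' + k*x = F, the standard second-order formulas give the
# natural frequency and damping ratio directly from m, c, k.
from numpy import sqrt
wn = sqrt(k / m)              # natural frequency ~ 7.07 rad/s
zeta = c / (2 * sqrt(k * m))  # damping ratio ~ 0.354 (< 1, so a few oscillations are expected)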
ts = linspace(0, 10, 500)
t, y = step_response(G, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
However, we have to check one thing. The values used in this system were taken from commercial specs for the suspension of a sedan-type car; however, the ```step_response``` function simulates the behavior of the system for a unit input (in this case $1N$), so for this simulation to be meaningful we have to amplify the input. We propose an input of $1100N$, which corresponds to the weight of a heavy man; what we then expect is a motion like the one that occurs when a heavy man gets into a sedan:
###Code
ts = linspace(0, 10, 500)
t, y = step_response(1100*G, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
And now we get a simulation with the behavior we would expect from a car's suspension when a heavy man gets in; the suspension stops moving after about $3s$ and settles at a value of approximately $0.07m$, i.e. $7cm$, after compressing almost $10cm$ and bouncing back some $3$ or $4$ times. --- Exercise * Define a system ```G1``` with a damper with constant $c=0\frac{Ns}{m}$, a spring with constant $k=100\frac{N}{m}$, and a mass of $10kg$.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
from numpy.testing import assert_allclose
assert_allclose(G1.dcgain(), [0.01], 2)
assert_allclose(G1.pole(), [0.+3.16j, 0.-3.16j], 2)
assert_allclose(G1.zero(), [], 2)
G1.zero()
###Output
_____no_output_____
###Markdown
* Simulate the behavior of this system for an applied force of $5N$ from time $0s$ to time $15s$.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
plot(t, y);
from nose.tools import assert_almost_equal, assert_equal
assert_equal(ts[0], 0)
assert_equal(ts[-1], 15)
assert_almost_equal(max(y), 0.02, 4)
assert_almost_equal(min(y), 0.0, 4)
###Output
_____no_output_____
###Markdown
* Define a system ```G2``` with a damper with constant $c=10\frac{Ns}{m}$, a spring with constant $k=0\frac{N}{m}$, and a mass of $10kg$.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
from numpy.testing import assert_allclose
from numpy import inf
assert_allclose(G2.dcgain(), [inf])
assert_allclose(G2.pole(), [-1, 0])
assert_allclose(G2.zero(), [], 2)
###Output
_____no_output_____
###Markdown
* Simulate the behavior of this system for an applied force of $5N$ from time $0s$ to time $20s$.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
plot(t, y);
from nose.tools import assert_almost_equal, assert_equal
assert_equal(ts[0], 0)
assert_equal(ts[-1], 20)
assert_almost_equal(max(y), 1.9, 4)
assert_almost_equal(min(y), 0.0, 4)
###Output
_____no_output_____
###Markdown
--- Now that we have verified how to simulate these mechanical systems, we can move on to the next step: predicting their behavior from the transfer function alone. The transfer function has several characteristics we have not discussed yet, for example the poles of the system:
###Code
G1 = tf([1], [1, 1, 1])
G1.pole()
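# Equivalent check (added sketch): the poles are just the roots of the
# characteristic polynomial s^2 + s + 1.
from numpy import roots
roots([1, 1, 1])  # array([-0.5+0.866j, -0.5-0.866j]), matching G1.pole()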
###Output
_____no_output_____
###Markdown
These poles are obtained by solving the equation formed by setting the denominator of the transfer function equal to $0$.> This denominator is called the **characteristic polynomial of the system**, since it is what determines the system's behavior. If we plot these poles we get:
###Code
rs, ks = root_locus(G1)
###Output
_____no_output_____
###Markdown
This plot is known as the **root locus**; in it, the crosses represent the poles we obtained, and the lines leaving these crosses represent the movement of these poles under feedback, which we will see in the next practice. In this plot we can note that the roots are complex and that their real part is negative; this last characteristic is what tells us that the behavior of this system will be stable. To corroborate this, we can simulate and plot its behavior:
###Code
ts = linspace(0, 15, 500)
t, y = step_response(G1, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
On the other hand, if we create a transfer function whose poles have real part equal to $0$, it will give us **critically stable** behavior.
###Code
G2 = tf([1], [1, 0, 1])
G2.pole()
rs, ks = root_locus(G2)
ts = linspace(0, 15, 500)
t, y = step_response(G2, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
Or a transfer function with poles with positive real part; its behavior will be unstable:
###Code
G3 = tf([1], [1, -1, 1])
G3.pole()
rs, ks = root_locus(G3)
ts = linspace(0, 15, 500)
t, y = step_response(G3, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
After this, we can ask ourselves whether, instead of hoping that our system's behavior turns out right, we can construct a function with the desired behavior, and the answer is yes; for example, we can use two purely real roots to obtain an overdamped behavior:$$G_4 = \frac{1}{s+1} \cdot \frac{1}{s+3} = \frac{1}{s^2 + 4s + 3}$$
###Code
G4 = tf([1], [1, 1])*tf([1], [1, 3])
G4.pole()
rs, ks = root_locus(G4)
ts = linspace(0, 15, 500)
t, y = step_response(G4, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
Or even make sure the behavior is unstable:
###Code
G5 = tf([1], [1, -1])*tf([1], [1, 3])
G5.pole()
rs, ks = root_locus(G5)
ts = linspace(0, 15, 500)
t, y = step_response(G5, ts)
plot(t, y);
###Output
_____no_output_____
###Markdown
--- Exercise * Define a transfer function ```G3``` with unstable behavior.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
assert not all([polo.real<0 for polo in G3.pole()])
###Output
_____no_output_____
###Markdown
* Define a transfer function ```G4``` with stable behavior.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
assert all([polo.real<0 for polo in G4.pole()])
###Output
_____no_output_____
###Markdown
* Define a transfer function ```G5``` with critically stable behavior.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
assert any([polo.real==0 for polo in G5.pole()])
###Output
_____no_output_____
###Markdown
* Define a transfer function ```G6``` with overdamped behavior.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
from numpy import pi, angle
assert all([pi - pi/4 < angle(polo) < pi + pi/4 for polo in G6.pole()])
###Output
_____no_output_____
###Markdown
* Define a transfer function ```G7``` with underdamped behavior.> Hint: Use the transfer function of the mass - spring - damper system as a starting point.
###Code
# WRITE YOUR CODE HERE
raise NotImplementedError
assert not all([pi - pi/4 < angle(polo) < pi + pi/4 for polo in G7.pole()])
###Output
_____no_output_____ |
examples/Corporate Campaigns.ipynb | ###Markdown
This is a simplified version of Saulius Simcikas's [model](https://forum.effectivealtruism.org/posts/L5EZjjXKdNgcm253H/corporate-campaigns-affect-9-to-120-years-of-chicken-life) of the cost-effectiveness of animal welfare corporate campaigning to get companies to pledge to use cage-free eggs. In this example model, the total-cost input is drawn from Saulius's Guesstimate distribution. This model also only covers egg-laying hens, while Saulius's also addresses broiler chickens.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sys
# To find module in parent directory
sys.path.append('..')
import simulator as sim
###Output
_____no_output_____
###Markdown
Model Building First we define the model, _M_.We can create normally-distributed parameters with the `simulator.Parameter.normal()` function, either passing in a mean and standard deviation or a confidence interval (and corresponding confidence-level, assumed to be 90%). These distributions can then be combined through use of normal python operators `+, -, *, \`. There is currently no support anything except these basic operations and (log)normal distributions.While not required it is helpful to use the function `Model.add_params()` to give names to parameters. The function `Model.add_inputs()` is similar except that it should be used when the parameter intuitively corresponds to an input to the model. This can be used to see which inputs an output is most sensitive to, for instance.
###Code
M = sim.Model()
# The Parameter class, for easy access to creation routines for Parameter objects.
P = sim.Parameter
# First we calculate the number of hens that would be affected with a 100% commitment rate
us_commitments = P.normal(ci=[210e6, 270e6])
us_p_cage_free_anyway = P.lognormal(ci=[.18, .26])
us_counterfactual = us_commitments * (1 - us_p_cage_free_anyway)
int_commitments = P.lognormal(ci=[100e6, 300e6])
int_p_cage_free_anyway = P.normal(ci=[.2, .5])
int_counterfactual = int_commitments * (1 - int_p_cage_free_anyway)
tot_counterfactual = us_counterfactual + int_counterfactual
# We can register the adjustable input parameters alongside engish names/descriptions
# This will potentially happen automatically in the future, unsure exactly what makes sense
M.add_inputs({
"us commitments (hens)": us_commitments,
"us proportion cage free anyway": us_p_cage_free_anyway,
"international commitments (hens)": int_commitments,
"international proportion cage free anyway": int_p_cage_free_anyway
})
# Total spending is taken by sampling outputs from the Guesstimate distribution in order to not
# have to replicate the whole calculation. This is done by copying and pasting samples from
# Guesstimate into a file which can then be read by our model.
# `P.sample_dist(list)` just randomly samples outcomes from `list`
tot_spending = P.sample_dist(np.loadtxt("spending samples.txt"))
ideal_per_dollar_per_year = tot_counterfactual / tot_spending
us_follow_through = P.lognormal(ci=[.33, .85])
int_follow_through = P.normal(ci=[.63, .9])
# The follow-through rate (a weighted average of US and international follow-through)
follow_through = (us_follow_through * us_counterfactual + \
                  int_follow_through * int_counterfactual) / tot_counterfactual
hens_per_year_per_dollar = follow_through * ideal_per_dollar_per_year
years_of_impact = P.lognormal(ci=[4, 36])
# Our final output
hen_years_per_dollar = hens_per_year_per_dollar * years_of_impact
M.add_inputs({
"total spending": tot_spending,
"us follow through": us_follow_through,
"international follow through": int_follow_through,
"years of impact": years_of_impact
})
M.add_params({
"US counterfactual": us_counterfactual,
"Interational counterfactual": int_counterfactual,
"Total counterfactual": tot_counterfactual,
"Follow through": follow_through,
"Hens per year, per dollar": hens_per_year_per_dollar,
"Hen years affected per dollar": hen_years_per_dollar
})
###Output
_____no_output_____
###Markdown
This is a simple way to illustrate which sub-calculations a particular parameter of interest depends on. It will only list parameters that are named with either `add_inputs()` or `add_params()`. While this is intended to be a full feature at some point, it currently does not deal with many edge cases and can be unwieldy, especially when a parameter depends on the same sub-calculation in multiple ways (e.g. "Total counterfactual" below).
###Code
def print_parents(param, prefix="- "):
if param.name is not None:
print(prefix + param.name)
for parent in param.parents:
if parent.name is None:
print_parents(parent, prefix)
else:
print_parents(parent, '\t' + prefix)
print_parents(hen_years_per_dollar)
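# One way to tame that repetition (a hypothetical sketch): track parameters
# already printed and reference them instead of re-expanding their subtree.
def print_parents_once(param, prefix="- ", seen=None):
    seen = set() if seen is None else seen
    if param.name is not None:
        if param.name in seen:
            print(prefix + param.name + " (see above)")
            return
        seen.add(param.name)
        print(prefix + param.name)
    for parent in param.parents:
        if parent.name is None:
            print_parents_once(parent, prefix, seen)
        else:
            print_parents_once(parent, '\t' + prefix, seen)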
###Output
- Hen years affected per dollar
- Hens per year, per dollar
- Follow through
- us follow through
- US counterfactual
- us commitments (hens)
- us proportion cage free anyway
- international follow through
			- International counterfactual
- international commitments (hens)
- international proportion cage free anyway
- Total counterfactual
- US counterfactual
- us commitments (hens)
- us proportion cage free anyway
				- International counterfactual
- international commitments (hens)
- international proportion cage free anyway
- Total counterfactual
- US counterfactual
- us commitments (hens)
- us proportion cage free anyway
			- International counterfactual
- international commitments (hens)
- international proportion cage free anyway
- total spending
- years of impact
###Markdown
Model Analysis Now that we've created the model, we can graph the sampled results for parameters of interest:
###Code
hen_years_per_dollar.print_summary();
###Output
5th: 5.41
mean: 47.64
95th: 152.62
std: 60.6164
###Markdown
We can also graph the correlations between one variable and all the input distributions. Note this uses the English names that we registered the input variables with!
###Code
M.input_r2s(hen_years_per_dollar);
###Output
Input r^2
----------------------------------------- ------
years of impact 0.5398
us follow through 0.0557
international commitments (hens) 0.0446
total spending 0.0302
us proportion cage free anyway 0.0028
international proportion cage free anyway 0.0022
international follow through 0.0005
us commitments (hens) 0.0005
###Markdown
Or look at scatterplots for the correlations between two variables:
###Code
M.sensitivity(years_of_impact, hen_years_per_dollar)
###Output
slope: 2.8971011079075297
intercept: -0.19122114979298033
r^2: 0.5398015550583193
###Markdown
Or create a graph comparing the outcome for the lowest 10% of sampled inputs for total spending (blue) to the highest 10% (orange).
###Code
M.sensitivty_comparisons(tot_spending, hen_years_per_dollar)
###Output
_____no_output_____ |
LSTM/2. LSTM Training, Part of Speech Tagging.ipynb | ###Markdown
LSTM for Part-of-Speech TaggingIn this section, we will use an LSTM to predict part-of-speech tags for words. What exactly is part-of-speech tagging?Part of speech tagging is the process of determining the *category* of a word from the words in its surrounding context. You can think of part of speech tagging as a way to go from words to their [Mad Libs](https://en.wikipedia.org/wiki/Mad_Libs#Format) categories. Mad Libs are incomplete short stories that have many words replaced by blanks. Each blank has a specified word-category, such as `"noun"`, `"verb"`, `"adjective"`, and so on. One player asks another to fill in these blanks (prompted only by the word-category) until they have created a complete, silly story of their own. Here is an example of such categories:```textToday, you'll be learning how to [verb]. It may be a [adjective] process, but I think it will be rewarding! If you want to take a break you should [verb] and treat yourself to some [plural noun].```... and a set of possible words that fall into those categories:```textToday, you'll be learning how to code. It may be a challenging process, but I think it will be rewarding! If you want to take a break you should stretch and treat yourself to some puppies.``` Why Tag Speech?Tagging parts of speech is often used to help disambiguate natural language phrases because it can be done quickly and with high accuracy. It can help answer: what subject is someone talking about? Tagging can be used for many NLP tasks like creating new sentences using a sequence of tags that make sense together, filling in a Mad Libs style game, and determining correct pronunciation during speech synthesis. It is also used in information retrieval, and for word disambiguation (ex. determining when someone says *right* like the direction versus *right* like "that's right!").--- Preparing the DataNow, we know that neural networks do not do well with words as input and so our first step will be to prepare our training data and map each word to a numerical value. We start by creating a small set of training data; you can see that this is a few simple sentences broken down into a list of words and their corresponding word-tags. Note that the sentences are turned into lowercase words using `lower()` and then split into separate words using `split()`, which splits the sentence by whitespace characters. Words to indicesThen, from this training data, we create a dictionary that maps each unique word in our vocabulary to a numerical value; a unique index `idx`. We do the same for each word-tag, for example: a noun will be represented by the number `1`.
###Code
# import resources
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline
# training sentences and their corresponding word-tags
training_data = [
("The cat ate the cheese".lower().split(), ["DET", "NN", "V", "DET", "NN"]),
("She read that book".lower().split(), ["NN", "V", "DET", "NN"]),
("The dog loves art".lower().split(), ["DET", "NN", "V", "NN"]),
("The elephant answers the phone".lower().split(), ["DET", "NN", "V", "DET", "NN"])
]
# create a dictionary that maps words to indices
word2idx = {}
for sent, tags in training_data:
for word in sent:
if word not in word2idx:
word2idx[word] = len(word2idx)
# create a dictionary that maps tags to indices
tag2idx = {"DET": 0, "NN": 1, "V": 2}
###Output
_____no_output_____
###Markdown
Next, print out the created dictionary to see the words and their numerical values! You should see every word in our training set and its index value. Note that the word "the" only appears once because our vocabulary only includes *unique* words.
###Code
# print out the created dictionary
print(word2idx)
import numpy as np
# a helper function for converting a sequence of words to a Tensor of numerical values
# will be used later in training
def prepare_sequence(seq, to_idx):
'''This function takes in a sequence of words and returns a
corresponding Tensor of numerical values (indices for each word).'''
idxs = [to_idx[w] for w in seq]
idxs = np.array(idxs)
return torch.from_numpy(idxs)
# check out what prepare_sequence does for one of our training sentences:
example_input = prepare_sequence("The dog answers the phone".lower().split(), word2idx)
print(example_input)
###Output
tensor([ 0, 8, 12, 0, 13], dtype=torch.int32)
###Markdown
--- Creating the ModelOur model will assume a few things:1. Our input is broken down into a sequence of words, so a sentence will be [w1, w2, ...]2. These words come from a larger list of words that we already know (a vocabulary)3. We have a limited set of tags, `[NN, V, DET]`, which mean: a noun, a verb, and a determiner (words like "the" or "that"), respectively4. We want to predict\* a tag for each input word\* To do the prediction, we will pass an LSTM over a test sentence and apply a softmax function to the hidden state of the LSTM; the result is a vector of tag scores from which we can get the predicted tag for a word based on the *maximum* value in this distribution of tag scores. Mathematically, we can represent any tag prediction $\hat{y}_i$ as: \begin{align}\hat{y}_i = \text{argmax}_j \ (\log \text{Softmax}(Ah_i + b))_j\end{align}Where $A$ is a learned weight and $b$, a learned bias term, and the hidden state at timestep $i$ is $h_i$. Word embeddingsWe know that an LSTM takes in an expected input size and hidden_dim, but sentences are rarely of a consistent size, so how can we define the input of our LSTM?Well, at the very start of this net, we'll create an `Embedding` layer that takes in the size of our vocabulary and returns a vector of a specified size, `embedding_dim`, for each word in an input sequence of words. It's important that this be the first layer in this net. You can read more about this embedding layer in [the PyTorch documentation](https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html#word-embeddings-in-pytorch).Pictured below is the expected architecture for this tagger model.
###Code
class LSTMTagger(nn.Module):
def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
''' Initialize the layers of this model.'''
super(LSTMTagger, self).__init__()
self.hidden_dim = hidden_dim
# embedding layer that turns words into a vector of a specified size
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
# the LSTM takes embedded word vectors (of a specified size) as inputs
# and outputs hidden states of size hidden_dim
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
# the linear layer that maps the hidden state output dimension
# to the number of tags we want as output, tagset_size (in this case this is 3 tags)
self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
# initialize the hidden state (see code below)
self.hidden = self.init_hidden()
def init_hidden(self):
''' At the start of training, we need to initialize a hidden state;
        there will be none yet, because the hidden state is formed based on previously seen data.
So, this function defines a hidden state with all zeroes and of a specified size.'''
# The axes dimensions are (n_layers, batch_size, hidden_dim)
return (torch.zeros(1, 1, self.hidden_dim),
torch.zeros(1, 1, self.hidden_dim))
def forward(self, sentence):
''' Define the feedforward behavior of the model.'''
# create embedded word vectors for each word in a sentence
embeds = self.word_embeddings(sentence)
# get the output and hidden state by passing the lstm over our word embeddings
        # the lstm takes in our embeddings and hidden state
lstm_out, self.hidden = self.lstm(
embeds.view(len(sentence), 1, -1), self.hidden)
# get the scores for the most likely tag for a word
tag_outputs = self.hidden2tag(lstm_out.view(len(sentence), -1))
tag_scores = F.log_softmax(tag_outputs, dim=1)
return tag_scores
###Output
_____no_output_____
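###Markdown
As a quick check of the prediction rule above, here is a minimal sketch (with made-up raw scores, independent of the model) showing that applying `F.log_softmax` and taking the argmax along the tag dimension implements the formula for $\hat{y}_i$.
###Code
import torch
import torch.nn.functional as F
# made-up raw scores for 2 words over our 3 tags (not model output)
raw_scores = torch.tensor([[2.0, 0.5, -1.0],
                           [0.1, 3.0, 0.2]])
# log-softmax converts each row of raw scores into log-probabilities over tags
log_probs = F.log_softmax(raw_scores, dim=1)
# argmax over the tag dimension picks the most likely tag for each word
predicted = torch.argmax(log_probs, dim=1)
print(log_probs)
print(predicted)  # tensor([0, 1]): tag 0 for the first word, tag 1 for the second
###Output
_____no_output_____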
###Markdown
Define how the model trainsTo train the model, we have to instantiate it and define the loss and optimizers that we want to use.First, we define the size of our word embeddings. The `EMBEDDING_DIM` defines the size of our word vectors for our simple vocabulary and training set; we will keep them small so we can see how the weights change as we train.**Note: the embedding dimension for a complex dataset will usually be much larger, around 64, 128, or 256 dimensional.** Loss and OptimizationSince our LSTM outputs a series of tag scores with a log-softmax layer, we will use `NLLLoss`. In tandem with a log-softmax layer, NLL Loss creates the kind of cross entropy loss that we typically use for analyzing a distribution of class scores. We'll use standard gradient descent optimization, but you are encouraged to play around with other optimizers!
###Code
# the embedding dimension defines the size of our word vectors
# for our simple vocabulary and training set, we will keep these small
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
# instantiate our model
model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word2idx), len(tag2idx))
# define our loss and optimizer
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
###Output
_____no_output_____
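###Markdown
As a side note, the pairing of a log-softmax output with `NLLLoss` is mathematically identical to applying `CrossEntropyLoss` directly to the raw scores; the sketch below verifies this identity numerically on random scores.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# random raw scores for 4 words over 3 tags, with arbitrary target tags
scores = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 1])
nll = nn.NLLLoss()(F.log_softmax(scores, dim=1), targets)
ce = nn.CrossEntropyLoss()(scores, targets)
print(torch.allclose(nll, ce))  # True: the two losses agree
###Output
_____no_output_____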
###Markdown
Just to check that our model has learned something, let's first look at the scores for a sample test sentence *before* our model is trained. Note that the test sentence *must* be made of words from our vocabulary otherwise its words cannot be turned into indices.The scores should be Tensors of length 3 (for each of our tags) and there should be scores for each word in the input sentence.For the test sentence, "The cheese loves the elephant", we know that this has the tags (DET, NN, V, DET, NN) or `[0, 1, 2, 0, 1]`, but our network does not yet know this. In fact, in this case, our model starts out with a hidden state of all zeroes and so all the scores and the predicted tags should be low, random, and about what you'd expect for a network that is not yet trained!
###Code
test_sentence = "The cheese loves the elephant".lower().split()
# see what the scores are before training
# element [i,j] of the output is the *score* for tag j for word i.
# to check the initial predictions of our model, we don't need to train; we just run a forward pass
inputs = prepare_sequence(test_sentence, word2idx)
tag_scores = model(inputs)
print(tag_scores)
# tag_scores outputs a vector of tag scores for each word in an input sentence
# to get the most likely tag index, we grab the index with the maximum score!
# recall that these numbers correspond to tag2idx = {"DET": 0, "NN": 1, "V": 2}
_, predicted_tags = torch.max(tag_scores, 1)
print('\n')
print('Predicted tags: \n',predicted_tags)
###Output
tensor([[-1.6372, -0.9253, -0.8939],
[-1.5766, -0.9248, -0.9245],
[-1.6839, -0.8132, -0.9918],
[-1.6793, -0.8331, -0.9707],
[-1.5990, -0.8312, -1.0151]], grad_fn=<LogSoftmaxBackward>)
Predicted tags:
tensor([2, 2, 1, 1, 1])
###Markdown
--- Train the ModelLoop through all our training data for multiple epochs (again we are using a small epoch value for this simple training data). This loop:1. Prepares our model for training by zero-ing the gradients2. Initializes the hidden state of our LSTM3. Prepares our data for training4. Runs a forward pass on our inputs to get tag_scores5. Calculates the loss between tag_scores and the true tag6. Updates the weights of our model using backpropagationIn this example, we are printing out the average epoch loss, every 20 epochs; you should see it decrease over time.
###Code
# normally these epochs take a lot longer
# but with our toy data (only 4 sentences), we can do many epochs in a short time
n_epochs = 300
for epoch in range(n_epochs):
epoch_loss = 0.0
# get all sentences and corresponding tags in the training data
for sentence, tags in training_data:
# zero the gradients
model.zero_grad()
# zero the hidden state of the LSTM, this detaches it from its history
model.hidden = model.init_hidden()
        # prepare the inputs for processing by our network,
# turn all sentences and targets into Tensors of numerical indices
sentence_in = prepare_sequence(sentence, word2idx)
targets = prepare_sequence(tags, tag2idx)
# forward pass to get tag scores
tag_scores = model(sentence_in)
# compute the loss, and gradients
loss = loss_function(tag_scores, targets)
epoch_loss += loss.item()
loss.backward()
# update the model parameters with optimizer.step()
optimizer.step()
# print out avg loss per 20 epochs
if(epoch%20 == 19):
print("Epoch: %d, loss: %1.5f" % (epoch+1, epoch_loss/len(training_data)))
###Output
_____no_output_____
###Markdown
TestingSee how your model performs *after* training. Compare this output with the scores from before training, above.Again, for the test sentence, "The cheese loves the elephant", we know that this has the tags (DET, NN, V, DET, NN) or `[0, 1, 2, 0, 1]`. Let's see if our model has learned to find these tags!
###Code
test_sentence = "The cheese loves the elephant".lower().split()
# see what the scores are after training
inputs = prepare_sequence(test_sentence, word2idx)
tag_scores = model(inputs)
print(tag_scores)
# print the most likely tag index, by grabbing the index with the maximum score!
# recall that these numbers correspond to tag2idx = {"DET": 0, "NN": 1, "V": 2}
_, predicted_tags = torch.max(tag_scores, 1)
print('\n')
print('Predicted tags: \n',predicted_tags)
###Output
_____no_output_____ |
examples/4-hyperdrive.ipynb | ###Markdown
Hyperparameter Tuning with HyperdriveHyperdrive is the advanced hyperparameter tuning capability provided by Azure Machine Learning. It trains in parallel at high speed on a Compute Cluster and searches for combinations of hyperparameters that give high accuracy. **Search algorithms**- Grid Search- Random Search- Bayesian Optimization Reference document: [Tune model hyperparameters with Azure Machine Learning](https://docs.microsoft.com/ja-JP/azure/machine-learning/how-to-tune-hyperparameters)
###Code
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig, Dataset
from azureml.widgets import RunDetails
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
Get the Dataset
###Code
dataset = Dataset.get_by_name(ws, name='cifar10')
###Output
_____no_output_____
###Markdown
Set the Experiment Name
###Code
experiment = Experiment(workspace=ws, name='dummy-hyperdrive2')
###Output
_____no_output_____
###Markdown
Run Settings for the Training Script
###Code
config = ScriptRunConfig(source_directory='./code/pytorch-hyperdrive',
script='train.py',
compute_target='gpucluster',
arguments=[
'--data_path', dataset.as_named_input('input').as_mount(),
'--learning_rate', 0.003,
'--momentum', 0.92])
###Output
_____no_output_____
###Markdown
Fetch the Environment and Attach It to the Run Configuration
###Code
env = Environment.get(ws, "pytorch-env")
config.run_config.environment = env
###Output
_____no_output_____
###Markdown
Parameter Settings for Hyperdrive
###Code
from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
from azureml.train.hyperdrive import choice, loguniform
# Define the parameter search space
ps = RandomParameterSampling(
{
'--learning_rate': loguniform(-6, -1),
'--momentum': loguniform(-6, -1),
}
)
policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
###Output
_____no_output_____
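###Markdown
As an aside: assuming `loguniform(a, b)` draws values distributed as `exp(uniform(a, b))` (the behavior documented for Azure ML's sampling helpers), `loguniform(-6, -1)` covers roughly `e^-6 ≈ 0.0025` to `e^-1 ≈ 0.37`. A quick NumPy sketch of that distribution:
###Code
import numpy as np
# emulate loguniform(-6, -1): the log of each sample is uniform on [-6, -1]
rng = np.random.default_rng(0)
samples = np.exp(rng.uniform(-6, -1, size=1000))
print(samples.min(), samples.max())  # all samples fall within [e^-6, e^-1]
###Output
_____no_output_____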
###Markdown
Hyperdrive Run Configuration
###Code
hyperdrive_config = HyperDriveConfig(run_config=config,
hyperparameter_sampling=ps,
policy=policy,
                                     primary_metric_name='train_loss', # metric to optimize
                                     primary_metric_goal=PrimaryMetricGoal.MINIMIZE, # or MAXIMIZE
                                     max_total_runs=20, # maximum number of runs
                                     max_concurrent_runs=4) # maximum concurrency
###Output
_____no_output_____
###Markdown
Run and Check the ResultsAccess the Jupyter Widget or the visualization features of Azure Machine Learning Studio to check the results.
###Code
run = experiment.submit(hyperdrive_config)
# Jupyter Widgets
RunDetails(run).show()
# text output
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Hyperparameter Tuning with Hyperdrive
Hyperdrive is the advanced hyperparameter tuning capability provided by Azure Machine Learning. It trains in parallel at high speed on a Compute Cluster and searches for combinations of hyperparameters that give high accuracy.
**Search algorithms**
- Grid Search
- Random Search
- Bayesian Optimization
Reference document: [Tune model hyperparameters with Azure Machine Learning](https://docs.microsoft.com/zh-cn/azure/machine-learning/how-to-tune-hyperparameters)
###Code
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig, Dataset
from azureml.widgets import RunDetails
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
Get the Dataset
###Code
dataset = Dataset.get_by_name(ws, name='cifar10')
###Output
_____no_output_____
###Markdown
Set the Experiment Name
###Code
experiment = Experiment(workspace=ws, name='dummy-hyperdrive2')
###Output
_____no_output_____
###Markdown
Run Settings for the Training Script
###Code
config = ScriptRunConfig(source_directory='./code/pytorch-hyperdrive',
script='train.py',
compute_target='compute1',
arguments=[
'--data_path', dataset.as_named_input('input').as_mount(),
'--learning_rate', 0.003,
'--momentum', 0.92])
###Output
_____no_output_____
###Markdown
Fetch the Environment and Attach It to the Run Configuration
###Code
env = Environment.get(ws, "pytorch-env")
config.run_config.environment = env
###Output
_____no_output_____
###Markdown
Parameter Settings for Hyperdrive
###Code
from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
from azureml.train.hyperdrive import choice, loguniform
# Define the parameter search space
ps = RandomParameterSampling(
{
'--learning_rate': loguniform(-6, -1),
'--momentum': loguniform(-6, -1),
}
)
policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
###Output
_____no_output_____
###Markdown
Hyperdrive Run Configuration
###Code
hyperdrive_config = HyperDriveConfig(run_config=config,
hyperparameter_sampling=ps,
policy=policy,
                                     primary_metric_name='train_loss', # metric to optimize
                                     primary_metric_goal=PrimaryMetricGoal.MINIMIZE, # or MAXIMIZE
                                     max_total_runs=20, # maximum number of runs
                                     max_concurrent_runs=4) # maximum concurrency
###Output
_____no_output_____
###Markdown
Run and Check the ResultsAccess the Jupyter Widget or the visualization features of Azure Machine Learning Studio to view the results.
###Code
run = experiment.submit(hyperdrive_config)
# Jupyter Widgets
RunDetails(run).show()
# text output
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
qick_demos/05_PhaseCoherence_QickProgram.ipynb | ###Markdown
Calibrating the QICK for phase coherent readout In this demo you will calibrate the QICK clocks to have the same phase.Before you measure a resonance with your QICK this is the first calibration you should do. It is a calibration for the two synthesizers which belong to the QICK signal generator and the QICK readout, respectively. The two synthesizers run at the same frequency, but there is initially a constant phase difference $\phi$ between them. Doing this calibration means finding that phase difference $\phi$. In your subsequent measurements, you can specify this initial phase difference $\phi$ to compensate for it. From then on, the signal generator can synthesize any frequency, and when you read in data (doing a digital down conversion in the process) the readout will still be phase coherent with respect to the signal generator. The angular frequency is $\omega = 2 \pi f$, so $\phi = \omega t + \phi_0 = (2 \pi f)t + \phi_0$. If $f$ increases linearly, the phase difference also changes linearly (it will either increase or decrease, depending on whether the readout is ahead of or behind the signal generator; this is randomly determined each time the board clocks are initialized). Once the phase reaches 360 degrees it wraps back to 0. For a readout frequency of interest $f_i$ there is a corresponding phase difference $\phi_i$; in this demonstration the sweeps are centered near $f_i \approx 100$ MHz. You can plot $\phi(f)$ and evaluate $\phi(f_i)=\phi_i$.
###Code
# Import the QICK drivers and auxiliary libraries
from qick import *
%pylab inline
# Load bitstream with custom overlay
soc = QickSoc()
soccfg = soc
###Output
_____no_output_____
###Markdown
Hardware Configuration: tProc channel 7 : DAC 229 CH3; Readout channel 0 : ADC 224 CH0
###Code
class LoopbackProgram(AveragerProgram):
def initialize(self):
cfg=self.cfg
# set the nyquist zone
self.declare_gen(ch=cfg["res_ch"], nqz=1)
self.r_rp=self.ch_page(self.cfg["res_ch"]) # get register page for res_ch
self.r_gain=self.sreg(cfg["res_ch"], "gain") #Get gain register for res_ch
#configure the readout lengths and downconversion frequencies
self.declare_readout(ch=cfg["ro_ch"], length=self.cfg["readout_length"],
freq=self.cfg["pulse_freq"], gen_ch=cfg["res_ch"])
freq=self.freq2reg(cfg["pulse_freq"], gen_ch=cfg["res_ch"], ro_ch=cfg["ro_ch"]) # convert frequency to dac frequency (ensuring it is an available adc frequency)
self.set_pulse_registers(ch=cfg["res_ch"], style="const", freq=freq, phase=0, gain=cfg["pulse_gain"],
length=cfg["length"])
self.synci(200) # give processor some time to configure pulses
def body(self):
self.measure(pulse_ch=self.cfg["res_ch"],
adcs=[self.cfg["ro_ch"]],
adc_trig_offset=self.cfg["adc_trig_offset"],
t=0,
wait=True,
syncdelay=self.us2cycles(self.cfg["relax_delay"]))
###Output
_____no_output_____
###Markdown
First, sanity check that we can see the pulse we want to calibrate
###Code
config={"res_ch":6, # --Fixed
"ro_ch":0, # --Fixed
"relax_delay":1.0, # --Fixed
"res_phase":0, # --Fixed
"length":400, # [Clock ticks]
"readout_length":200, # [Clock ticks]
"pulse_gain":10000, # [DAC units]
"pulse_freq": 100, # [MHz]
"adc_trig_offset": 200, # [Clock ticks]
"reps":1,
"soft_avgs":1,
}
prog =LoopbackProgram(soccfg, config)
(iq0,) = prog.acquire_decimated(soc, load_pulses=True,progress=False)
# Plot results.
plt.figure(1)
plt.plot(iq0[0], label="I value; ADC 0")
plt.plot(iq0[1], label="Q value; ADC 0")
plt.ylabel("a.u.")
plt.xlabel("Clock ticks")
plt.title("Averages = " + str(config["soft_avgs"]))
plt.legend()
###Output
_____no_output_____
###Markdown
Now we perform the calibration: Params 1 (spacing between points is too large)
###Code
sweep_cfg={"start":100, "step":0.0005, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(soccfg, config)
(iq0,) = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 2 (We try again with finer spacing and now there is enough data for us to calibrate phase)
###Code
sweep_cfg={"start":100, "step":0.000125, "expts":160}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(soccfg, config)
(iq0,) = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 3 (We zoom in on the frequency area of interest and then print out the associated phase of interest)
###Code
sweep_cfg={"start":100.0075, "step":0.000125, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(soccfg, config)
(iq0,) = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
print("Iteration i = %d, freq_i = %f MHz, phi_i = %f degrees" %(x,gpts[x], phase_array[x]))
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
Iteration i = 0, freq_i = 100.007500 MHz, phi_i = 355.168452 degrees
Iteration i = 1, freq_i = 100.007625 MHz, phi_i = 344.942773 degrees
Iteration i = 2, freq_i = 100.007750 MHz, phi_i = 334.561054 degrees
Iteration i = 3, freq_i = 100.007875 MHz, phi_i = 324.265874 degrees
Iteration i = 4, freq_i = 100.008000 MHz, phi_i = 313.901978 degrees
Iteration i = 5, freq_i = 100.008125 MHz, phi_i = 303.607619 degrees
Iteration i = 6, freq_i = 100.008250 MHz, phi_i = 293.346924 degrees
Iteration i = 7, freq_i = 100.008375 MHz, phi_i = 282.962804 degrees
Iteration i = 8, freq_i = 100.008500 MHz, phi_i = 272.687116 degrees
Iteration i = 9, freq_i = 100.008625 MHz, phi_i = 262.384146 degrees
Iteration i = 10, freq_i = 100.008750 MHz, phi_i = 252.026269 degrees
Iteration i = 11, freq_i = 100.008875 MHz, phi_i = 241.732993 degrees
Iteration i = 12, freq_i = 100.009000 MHz, phi_i = 231.350591 degrees
Iteration i = 13, freq_i = 100.009125 MHz, phi_i = 221.087037 degrees
Iteration i = 14, freq_i = 100.009250 MHz, phi_i = 210.793508 degrees
Iteration i = 15, freq_i = 100.009375 MHz, phi_i = 200.399291 degrees
Iteration i = 16, freq_i = 100.009500 MHz, phi_i = 190.139522 degrees
Iteration i = 17, freq_i = 100.009625 MHz, phi_i = 179.734384 degrees
Iteration i = 18, freq_i = 100.009750 MHz, phi_i = 169.496957 degrees
Iteration i = 19, freq_i = 100.009875 MHz, phi_i = 159.184603 degrees
Iteration i = 20, freq_i = 100.010000 MHz, phi_i = 148.810858 degrees
Iteration i = 21, freq_i = 100.010125 MHz, phi_i = 138.574445 degrees
Iteration i = 22, freq_i = 100.010250 MHz, phi_i = 128.281286 degrees
Iteration i = 23, freq_i = 100.010375 MHz, phi_i = 117.892335 degrees
Iteration i = 24, freq_i = 100.010500 MHz, phi_i = 107.620575 degrees
Iteration i = 25, freq_i = 100.010625 MHz, phi_i = 97.225735 degrees
Iteration i = 26, freq_i = 100.010750 MHz, phi_i = 86.949832 degrees
Iteration i = 27, freq_i = 100.010875 MHz, phi_i = 76.650230 degrees
Iteration i = 28, freq_i = 100.011000 MHz, phi_i = 66.272948 degrees
Iteration i = 29, freq_i = 100.011125 MHz, phi_i = 56.004722 degrees
Iteration i = 30, freq_i = 100.011250 MHz, phi_i = 45.684729 degrees
Iteration i = 31, freq_i = 100.011375 MHz, phi_i = 35.369000 degrees
Iteration i = 32, freq_i = 100.011500 MHz, phi_i = 25.065924 degrees
Iteration i = 33, freq_i = 100.011625 MHz, phi_i = 14.665463 degrees
Iteration i = 34, freq_i = 100.011750 MHz, phi_i = 4.390865 degrees
Iteration i = 35, freq_i = 100.011875 MHz, phi_i = 354.139750 degrees
Iteration i = 36, freq_i = 100.012000 MHz, phi_i = 343.761122 degrees
Iteration i = 37, freq_i = 100.012125 MHz, phi_i = 333.467373 degrees
Iteration i = 38, freq_i = 100.012250 MHz, phi_i = 323.097440 degrees
Iteration i = 39, freq_i = 100.012375 MHz, phi_i = 312.817796 degrees
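###Markdown
Since $\phi(f)$ is linear between wraps, one way to read off $\phi_i$ at an arbitrary frequency is to unwrap the measured phase, fit a line, and evaluate it at the frequency of interest. This is a minimal sketch reusing `gpts` and `phase_array` from the cell above; the frequency value is a hypothetical example.
###Code
import numpy as np
# unwrap removes the 360-degree jumps so the fit sees a single straight line
phase_unwrapped = np.rad2deg(np.unwrap(np.deg2rad(phase_array)))
slope, intercept = np.polyfit(gpts, phase_unwrapped, 1)
f_i = 100.010  # [MHz] hypothetical readout frequency of interest
phi_i = (slope*f_i + intercept) % 360
print("phi(%f MHz) = %f degrees" % (f_i, phi_i))
###Output
_____no_output_____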
###Markdown
Calibrating the QICK for phase coherent readout In this demo you will calibrate the QICK clocks to have the same phase.Before you measure a resonance with your QICK this is the first calibration you should do. It is a calibration for the two synthesizers which belong to the QICK signal generator and the QICK readout, respectively. The two synthesizers run at the same frequency, but there is initially a constant phase difference $\phi$ between them. Doing this calibration means finding that phase difference $\phi$. In your subsequent measurements, you can specify this initial phase difference $\phi$ to compensate for it. From then on, the signal generator can synthesize any frequency, and when you read in data (doing a digital down conversion in the process) the readout will still be phase coherent with respect to the signal generator. The angular frequency is $\omega = 2 \pi f$, so $\phi = \omega t + \phi_0 = (2 \pi f)t + \phi_0$. If $f$ increases linearly, the phase difference also changes linearly (it will either increase or decrease, depending on whether the readout is ahead of or behind the signal generator; this is randomly determined each time the board clocks are initialized). Once the phase reaches 360 degrees it wraps back to 0. For a readout frequency of interest $f_i$ there is a corresponding phase difference $\phi_i$; in this demonstration the sweeps are centered near $f_i \approx 100$ MHz. You can plot $\phi(f)$ and evaluate $\phi(f_i)=\phi_i$.
###Code
# Import the QICK drivers and auxiliary libraries
from qick import *
from qick.helpers import gauss
import time
import cmath
%pylab inline
# Load bitstream with custom overlay
soc = QickSoc(force_init_clks=False)
# Set the loopback DAC channel to be in 1st Nyquist zone mode
soc.set_nyquist(ch=7,nqz=1);
###Output
_____no_output_____
###Markdown
Hardware Configuration: tProc channel 7 : DAC 229 CH3; Readout channel 0 : ADC 224 CH0
###Code
class LoopbackProgram(AveragerProgram):
def __init__(self,cfg):
AveragerProgram.__init__(self,cfg)
def initialize(self):
cfg=self.cfg
r_freq=self.sreg(cfg["res_ch"], "freq") #Get frequency register for res_ch
self.cfg["adc_lengths"]=[self.cfg["readout_length"]]*2 #add length of adc acquisition to config
self.cfg["adc_freqs"]=[adcfreq(self.cfg["pulse_freq"])]*2 #add frequency of adc ddc to config
self.add_pulse(ch=self.cfg["res_ch"], name="measure", style=self.cfg["pulse_style"], length=self.cfg["length"],idata = self.cfg["idata"]) #add a constant pulse to the pulse library
freq=freq2reg(adcfreq(cfg["pulse_freq"])) # convert frequency to dac frequency (ensuring it is an available adc frequency)
# print("ADC freq = ", adcfreq(cfg["pulse_freq"]))
self.pulse(ch=cfg["res_ch"], name="measure", freq=freq, phase=0, gain=cfg["pulse_gain"], t= 0, play=False) # pre-configure readout pulse
self.synci(1000) # give processor some time to configure pulses
def body(self):
self.trigger_adc(adc1=1, adc2=1,adc_trig_offset=self.cfg["adc_trig_offset"]) # trigger the adc acquisition
self.pulse(ch=self.cfg["res_ch"], name="measure", play=True, outsel=1) # play readout pulse
self.sync_all(us2cycles(self.cfg["relax_delay"]))
###Output
_____no_output_____
###Markdown
First, sanity check that we can see the pulse we want to calibrate
###Code
config={"res_ch":7, # --Fixed
"relax_delay":0, # --Fixed
"res_phase":0, # --Fixed
"pulse_style": "const", # --Fixed
"length":250, # [Clock ticks]
"sigma": 30, # [Clock ticks]
"readout_length":200, # [Clock ticks]
"pulse_gain":10000, # [DAC units]
"pulse_freq": 100, # [MHz]
"adc_trig_offset": 220, # [Clock ticks]
"reps":1,
"soft_avgs":1,
}
config["idata"] = gauss(mu=config["sigma"]*16*5/2,si=config["sigma"]*16,length=5*config["sigma"]*16,maxv=32000)
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
# Plot results.
plt.figure(1)
plt.plot(iq0[0], label="I value; ADC 0")
plt.plot(iq0[1], label="Q value; ADC 0")
plt.plot(iq1[0], label="I value; ADC 1")
plt.plot(iq1[1], label="Q value; ADC 1")
plt.ylabel("a.u.")
plt.xlabel("Clock ticks")
plt.title("Averages = " + str(config["soft_avgs"]))
plt.legend()
###Output
_____no_output_____
###Markdown
Now we perform the calibration: Params 1 (spacing between points is too large)
###Code
sweep_cfg={"start":100, "step":0.0005, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 2 (We try again with finer spacing and now there is enough data for us to calibrate phase)
###Code
sweep_cfg={"start":100, "step":0.000125, "expts":160}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 3 (We zoom in on the frequency area of interest and then print out the associated phase of interest)
###Code
sweep_cfg={"start":100.0075, "step":0.000125, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
print("Iteration i = %d, freq_i = %f MHz, phi_i = %f degrees" %(x,gpts[x], phase_array[x]))
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
Iteration i = 0, freq_i = 100.007500 MHz, phi_i = 187.992025 degrees
Iteration i = 1, freq_i = 100.007625 MHz, phi_i = 177.553958 degrees
Iteration i = 2, freq_i = 100.007750 MHz, phi_i = 167.033003 degrees
Iteration i = 3, freq_i = 100.007875 MHz, phi_i = 156.486800 degrees
Iteration i = 4, freq_i = 100.008000 MHz, phi_i = 145.976953 degrees
Iteration i = 5, freq_i = 100.008125 MHz, phi_i = 135.516772 degrees
Iteration i = 6, freq_i = 100.008250 MHz, phi_i = 124.963514 degrees
Iteration i = 7, freq_i = 100.008375 MHz, phi_i = 114.494701 degrees
Iteration i = 8, freq_i = 100.008500 MHz, phi_i = 103.891709 degrees
Iteration i = 9, freq_i = 100.008625 MHz, phi_i = 93.454635 degrees
Iteration i = 10, freq_i = 100.008750 MHz, phi_i = 82.956005 degrees
Iteration i = 11, freq_i = 100.008875 MHz, phi_i = 72.357968 degrees
Iteration i = 12, freq_i = 100.009000 MHz, phi_i = 61.906141 degrees
Iteration i = 13, freq_i = 100.009125 MHz, phi_i = 51.323893 degrees
Iteration i = 14, freq_i = 100.009250 MHz, phi_i = 40.865427 degrees
Iteration i = 15, freq_i = 100.009375 MHz, phi_i = 30.394780 degrees
Iteration i = 16, freq_i = 100.009500 MHz, phi_i = 19.802899 degrees
Iteration i = 17, freq_i = 100.009625 MHz, phi_i = 9.326770 degrees
Iteration i = 18, freq_i = 100.009750 MHz, phi_i = 358.846537 degrees
Iteration i = 19, freq_i = 100.009875 MHz, phi_i = 348.272914 degrees
Iteration i = 20, freq_i = 100.010000 MHz, phi_i = 337.812802 degrees
Iteration i = 21, freq_i = 100.010125 MHz, phi_i = 327.223179 degrees
Iteration i = 22, freq_i = 100.010250 MHz, phi_i = 316.753581 degrees
Iteration i = 23, freq_i = 100.010375 MHz, phi_i = 306.303491 degrees
Iteration i = 24, freq_i = 100.010500 MHz, phi_i = 295.691120 degrees
Iteration i = 25, freq_i = 100.010625 MHz, phi_i = 285.230260 degrees
Iteration i = 26, freq_i = 100.010750 MHz, phi_i = 274.770969 degrees
Iteration i = 27, freq_i = 100.010875 MHz, phi_i = 264.158280 degrees
Iteration i = 28, freq_i = 100.011000 MHz, phi_i = 253.679665 degrees
Iteration i = 29, freq_i = 100.011125 MHz, phi_i = 243.117201 degrees
Iteration i = 30, freq_i = 100.011250 MHz, phi_i = 232.628533 degrees
Iteration i = 31, freq_i = 100.011375 MHz, phi_i = 222.198801 degrees
Iteration i = 32, freq_i = 100.011500 MHz, phi_i = 211.611933 degrees
Iteration i = 33, freq_i = 100.011625 MHz, phi_i = 201.135250 degrees
Iteration i = 34, freq_i = 100.011750 MHz, phi_i = 190.533969 degrees
Iteration i = 35, freq_i = 100.011875 MHz, phi_i = 180.054081 degrees
Iteration i = 36, freq_i = 100.012000 MHz, phi_i = 169.602428 degrees
Iteration i = 37, freq_i = 100.012125 MHz, phi_i = 159.041472 degrees
Iteration i = 38, freq_i = 100.012250 MHz, phi_i = 148.559134 degrees
Iteration i = 39, freq_i = 100.012375 MHz, phi_i = 138.066087 degrees
###Markdown
Calibrating the QICK for phase coherent readout In this demo you will calibrate the QICK clocks to have the same phase.Before you measure a resonance with your QICK this is the first calibration you should do. It is a calibration for the two synthesizers which belong to the QICK signal generator and the QICK readout, respectively. The two synthesizers run at the same frequency, but there is initially a constant phase difference $\phi$ between them. Doing this calibration means finding that phase difference $\phi$. In your subsequent measurements, you can specify this initial phase difference $\phi$ to compensate for it. From then on, the signal generator can synthesize any frequency, and when you read in data (doing a digital down conversion in the process) the readout will still be phase coherent with respect to the signal generator. The angular frequency is $\omega = 2 \pi f$, so $\phi = \omega t + \phi_0 = (2 \pi f)t + \phi_0$. If $f$ increases linearly, the phase difference also changes linearly (it will either increase or decrease, depending on whether the readout is ahead of or behind the signal generator; this is randomly determined each time the board clocks are initialized). Once the phase reaches 360 degrees it wraps back to 0. For a readout frequency of interest $f_i$ there is a corresponding phase difference $\phi_i$; in this demonstration the sweeps are centered near $f_i \approx 100$ MHz. You can plot $\phi(f)$ and evaluate $\phi(f_i)=\phi_i$.
###Code
# Import the QICK drivers and auxiliary libraries
from qick import *
from qick.helpers import gauss
import time
import cmath
%pylab inline
# Load bitstream with custom overlay
soc = QickSoc(force_init_clks=False)
# Set the loopback DAC channel to be in 1st Nyquist zone mode
soc.set_nyquist(ch=7,nqz=1);
###Output
_____no_output_____
###Markdown
Hardware Configuration: tProc channel 7 : DAC 229 CH3; Readout channel 0 : ADC 224 CH0
###Code
class LoopbackProgram(AveragerProgram):
def __init__(self,cfg):
AveragerProgram.__init__(self,cfg)
def initialize(self):
cfg=self.cfg
r_freq=self.sreg(cfg["res_ch"], "freq") #Get frequency register for res_ch
self.cfg["adc_lengths"]=[self.cfg["readout_length"]]*2 #add length of adc acquisition to config
self.cfg["adc_freqs"]=[self.cfg["pulse_freq"]]*2 #add frequency of adc ddc to config
self.add_pulse(ch=self.cfg["res_ch"], name="measure", style=self.cfg["pulse_style"], length=self.cfg["length"],idata = self.cfg["idata"]) #add a constant pulse to the pulse library
freq=soc.freq2reg(cfg["pulse_freq"]) # convert frequency to dac frequency (ensuring it is an available adc frequency)
# print("ADC freq = ", adcfreq(cfg["pulse_freq"]))
self.pulse(ch=cfg["res_ch"], name="measure", freq=freq, phase=0, gain=cfg["pulse_gain"], t= 0, play=False) # pre-configure readout pulse
self.synci(1000) # give processor some time to configure pulses
def body(self):
self.trigger_adc(adc1=1, adc2=1,adc_trig_offset=self.cfg["adc_trig_offset"]) # trigger the adc acquisition
self.pulse(ch=self.cfg["res_ch"], name="measure", play=True, outsel=1) # play readout pulse
# control should wait until the readout is over
self.waiti(0, self.cfg["adc_trig_offset"]+self.cfg["readout_length"])
self.sync_all(soc.us2cycles(self.cfg["relax_delay"]))
###Output
_____no_output_____
###Markdown
First, sanity check that we can see the pulse we want to calibrate
###Code
config={"res_ch":7, # --Fixed
"relax_delay":0, # --Fixed
"res_phase":0, # --Fixed
"pulse_style": "const", # --Fixed
"length":250, # [Clock ticks]
"sigma": 30, # [Clock ticks]
"readout_length":200, # [Clock ticks]
"pulse_gain":10000, # [DAC units]
"pulse_freq": 100, # [MHz]
"adc_trig_offset": 220, # [Clock ticks]
"reps":1,
"soft_avgs":1,
}
config["idata"] = gauss(mu=config["sigma"]*16*5/2,si=config["sigma"]*16,length=5*config["sigma"]*16,maxv=32000)
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
# Plot results.
plt.figure(1)
plt.plot(iq0[0], label="I value; ADC 0")
plt.plot(iq0[1], label="Q value; ADC 0")
plt.plot(iq1[0], label="I value; ADC 1")
plt.plot(iq1[1], label="Q value; ADC 1")
plt.ylabel("a.u.")
plt.xlabel("Clock ticks")
plt.title("Averages = " + str(config["soft_avgs"]))
plt.legend()
###Output
_____no_output_____
###Markdown
Now we perform the calibration: Params 1 (spacing between points is too large)
###Code
sweep_cfg={"start":100, "step":0.0005, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 2 (We try again with finer spacing and now there is enough data for us to calibrate phase)
###Code
sweep_cfg={"start":100, "step":0.000125, "expts":160}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 3 (We zoom in on the frequency area of interest and then print out the associated phase of interest)
###Code
sweep_cfg={"start":100.0075, "step":0.000125, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
print("Iteration i = %d, freq_i = %f MHz, phi_i = %f degrees" %(x,gpts[x], phase_array[x]))
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
Iteration i = 0, freq_i = 100.007500 MHz, phi_i = 52.903117 degrees
Iteration i = 1, freq_i = 100.007625 MHz, phi_i = 42.614056 degrees
Iteration i = 2, freq_i = 100.007750 MHz, phi_i = 32.204966 degrees
Iteration i = 3, freq_i = 100.007875 MHz, phi_i = 21.937536 degrees
Iteration i = 4, freq_i = 100.008000 MHz, phi_i = 11.548153 degrees
Iteration i = 5, freq_i = 100.008125 MHz, phi_i = 1.233681 degrees
Iteration i = 6, freq_i = 100.008250 MHz, phi_i = 350.989206 degrees
Iteration i = 7, freq_i = 100.008375 MHz, phi_i = 340.623779 degrees
Iteration i = 8, freq_i = 100.008500 MHz, phi_i = 330.335087 degrees
Iteration i = 9, freq_i = 100.008625 MHz, phi_i = 320.055206 degrees
Iteration i = 10, freq_i = 100.008750 MHz, phi_i = 309.649147 degrees
Iteration i = 11, freq_i = 100.008875 MHz, phi_i = 299.403463 degrees
Iteration i = 12, freq_i = 100.009000 MHz, phi_i = 288.995978 degrees
Iteration i = 13, freq_i = 100.009125 MHz, phi_i = 278.744806 degrees
Iteration i = 14, freq_i = 100.009250 MHz, phi_i = 268.464626 degrees
Iteration i = 15, freq_i = 100.009375 MHz, phi_i = 258.051361 degrees
Iteration i = 16, freq_i = 100.009500 MHz, phi_i = 247.781225 degrees
Iteration i = 17, freq_i = 100.009625 MHz, phi_i = 237.399006 degrees
Iteration i = 18, freq_i = 100.009750 MHz, phi_i = 227.126110 degrees
Iteration i = 19, freq_i = 100.009875 MHz, phi_i = 216.844654 degrees
Iteration i = 20, freq_i = 100.010000 MHz, phi_i = 206.432689 degrees
Iteration i = 21, freq_i = 100.010125 MHz, phi_i = 196.172768 degrees
Iteration i = 22, freq_i = 100.010250 MHz, phi_i = 185.890094 degrees
Iteration i = 23, freq_i = 100.010375 MHz, phi_i = 175.487849 degrees
Iteration i = 24, freq_i = 100.010500 MHz, phi_i = 165.204636 degrees
Iteration i = 25, freq_i = 100.010625 MHz, phi_i = 154.838695 degrees
Iteration i = 26, freq_i = 100.010750 MHz, phi_i = 144.575705 degrees
Iteration i = 27, freq_i = 100.010875 MHz, phi_i = 134.269904 degrees
Iteration i = 28, freq_i = 100.011000 MHz, phi_i = 123.862658 degrees
Iteration i = 29, freq_i = 100.011125 MHz, phi_i = 113.593485 degrees
Iteration i = 30, freq_i = 100.011250 MHz, phi_i = 103.324819 degrees
Iteration i = 31, freq_i = 100.011375 MHz, phi_i = 92.933727 degrees
Iteration i = 32, freq_i = 100.011500 MHz, phi_i = 82.653880 degrees
Iteration i = 33, freq_i = 100.011625 MHz, phi_i = 72.282556 degrees
Iteration i = 34, freq_i = 100.011750 MHz, phi_i = 61.999773 degrees
Iteration i = 35, freq_i = 100.011875 MHz, phi_i = 51.725837 degrees
Iteration i = 36, freq_i = 100.012000 MHz, phi_i = 41.353272 degrees
Iteration i = 37, freq_i = 100.012125 MHz, phi_i = 31.035699 degrees
Iteration i = 38, freq_i = 100.012250 MHz, phi_i = 20.672140 degrees
Iteration i = 39, freq_i = 100.012375 MHz, phi_i = 10.395885 degrees
###Markdown
Calibrating the QICK for phase coherent readout In this demo you will calibrate the QICK clocks to have the same phase.Before you measure a resonance with your QICK this is the first calibration you should do. It is a calibration for the two synthesizers which belong to the QICK signal generator and the QICK readout, respectively. The two synthesizers are running at the same frequency, but there is initially a constant phase difference $\phi$ between these two synthesizers. Doing this calibration results in you finding that phase difference $\phi$. In your subsequent measurements, you can specify this initial phase difference $\phi$ to compensate for it. From then on, the signal generator can synthesize any frequency and then if you read in data (doing a digital down conversion in the process), the readout will still be phase coherent with respect to the signal generator. The angular frequency $\omega = 2 \pi f$ . Also, $\phi = (\omega t) + \phi_0$. So, $\phi = (2 \pi f)*t + \phi_0 $. If $f$ goes up linearly, the phase difference will also change linearly (it will either increase or decrease, depending on whether the readout is ahead or behind of the signal generator- this is randomly determined each time the board clocks are initialized). Once the phase hits 360 degrees it cycles back to 0 again. For a readout frequency of interest $f_i$ there is a corresponding phase difference $\phi_i$. In this demonstration we assume $f_i \approx 180$ MHz. You can plot $\phi(f)$ and evaluate $\phi(f_i)=\phi_i$.
###Code
# Import the QICK drivers and auxiliary libraries
from qick import *
from qick.helpers import gauss
import time
import cmath
%pylab inline
# Load bitstream with custom overlay
soc = QickSoc(force_init_clks=False)
# Set the loopback DAC channel to be in 1st Nyquist zone mode
soc.set_nyquist(ch=7,nqz=1);
###Output
_____no_output_____
###Markdown
Hardware ConfigurationtProc channel 7 : DAC 229 CH3 Readout channel 0 : ADC 224 CH0
###Code
class LoopbackProgram(AveragerProgram):
def __init__(self,cfg):
AveragerProgram.__init__(self,cfg)
def initialize(self):
cfg=self.cfg
r_freq=self.sreg(cfg["res_ch"], "freq") #Get frequency register for res_ch
self.cfg["adc_lengths"]=[self.cfg["readout_length"]]*2 #add length of adc acquisition to config
self.cfg["adc_freqs"]=[adcfreq(self.cfg["pulse_freq"])]*2 #add frequency of adc ddc to config
self.add_pulse(ch=self.cfg["res_ch"], name="measure", style=self.cfg["pulse_style"], length=self.cfg["length"],idata = self.cfg["idata"]) #add a constant pulse to the pulse library
freq=freq2reg(adcfreq(cfg["pulse_freq"])) # convert frequency to dac frequency (ensuring it is an available adc frequency)
# print("ADC freq = ", adcfreq(cfg["pulse_freq"]))
self.pulse(ch=cfg["res_ch"], name="measure", freq=freq, phase=0, gain=cfg["pulse_gain"], t= 0, play=False) # pre-configure readout pulse
self.synci(1000) # give processor some time to configure pulses
def body(self):
self.trigger_adc(adc1=1, adc2=1,adc_trig_offset=self.cfg["adc_trig_offset"]) # trigger the adc acquisition
self.pulse(ch=self.cfg["res_ch"], name="measure", play=True, outsel=1) # play readout pulse
self.sync_all(us2cycles(self.cfg["relax_delay"]))
###Output
_____no_output_____
###Markdown
First, sanity check that we can see the pulse we want to calibrate
###Code
config={"res_ch":7, # --Fixed
"relax_delay":0, # --Fixed
"res_phase":0, # --Fixed
"pulse_style": "const", # --Fixed
"length":250, # [Clock ticks]
"sigma": 30, # [Clock ticks]
"readout_length":200, # [Clock ticks]
"pulse_gain":10000, # [DAC units]
"pulse_freq": 100, # [MHz]
"adc_trig_offset": 220, # [Clock ticks]
"reps":1,
"soft_avgs":1,
}
config["idata"] = gauss(mu=config["sigma"]*16*5/2,si=config["sigma"]*16,length=5*config["sigma"]*16,maxv=32000)
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
# Plot results.
plt.figure(1)
plt.plot(iq0[0], label="I value; ADC 0")
plt.plot(iq0[1], label="Q value; ADC 0")
plt.plot(iq1[0], label="I value; ADC 1")
plt.plot(iq1[1], label="Q value; ADC 1")
plt.ylabel("a.u.")
plt.xlabel("Clock ticks")
plt.title("Averages = " + str(config["soft_avgs"]))
plt.legend()
###Output
_____no_output_____
###Markdown
Now we perform the calibration: Params 1 (spacing between points is too large)
###Code
sweep_cfg={"start":100, "step":0.0005, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 2 (We try again with finer spacing and now there is enough data for us to calibrate phase)
###Code
sweep_cfg={"start":100, "step":0.000125, "expts":160}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
_____no_output_____
###Markdown
Params 3 (We zoom in on the frequency area of interest and then print out the associated phase of interest)
###Code
sweep_cfg={"start":100.0075, "step":0.000125, "expts":40}
gpts=sweep_cfg["start"] + sweep_cfg["step"]*np.arange(sweep_cfg["expts"])
resultsi=[]
resultsq=[]
for g in gpts:
time.sleep(0.1)
config["pulse_freq"]=g
prog =LoopbackProgram(config)
iq0, iq1 = prog.acquire_decimated(soc, load_pulses=True,progress=False)
di0 = np.sum(iq0[0])/config["readout_length"]
dq0 = np.sum(iq0[1])/config["readout_length"]
resultsi.append(di0)
resultsq.append(dq0)
resultsi=np.array(resultsi)
resultsq=np.array(resultsq)
# Plot results.
sig = resultsi + 1j * resultsq
amp_array = np.abs(sig)
phase_array = np.angle(sig,deg=True)
for x in range(0,len(phase_array)):
if phase_array[x] <0:
phase_array[x] = phase_array[x] +360
print("Iteration i = %d, freq_i = %f MHz, phi_i = %f degrees" %(x,gpts[x], phase_array[x]))
plt.figure(1)
# plt.plot(gpts, resultsi,label="I value; ADC 0")
# plt.plot(gpts, resultsq,label="Q value; ADC 0")
# plt.plot(gpts, amp_array,label="Amplitude (DAC units); ADC 0")
plt.plot(gpts, phase_array, label="Phase (degrees); ADC 0")
plt.plot(gpts,phase_array, marker='.', linestyle="None",color="Red")
plt.xticks(rotation=90)
plt.title(r"$\phi$ vs $f$")
plt.ylabel(r"$\phi$ (degrees)")
plt.xlabel(r"$f$ (MHz)")
plt.legend()
plt.savefig("images/Phase_sweep.pdf", dpi=350)
###Output
Iteration i = 0, freq_i = 100.007500 MHz, phi_i = 187.992025 degrees
Iteration i = 1, freq_i = 100.007625 MHz, phi_i = 177.553958 degrees
Iteration i = 2, freq_i = 100.007750 MHz, phi_i = 167.033003 degrees
Iteration i = 3, freq_i = 100.007875 MHz, phi_i = 156.486800 degrees
Iteration i = 4, freq_i = 100.008000 MHz, phi_i = 145.976953 degrees
Iteration i = 5, freq_i = 100.008125 MHz, phi_i = 135.516772 degrees
Iteration i = 6, freq_i = 100.008250 MHz, phi_i = 124.963514 degrees
Iteration i = 7, freq_i = 100.008375 MHz, phi_i = 114.494701 degrees
Iteration i = 8, freq_i = 100.008500 MHz, phi_i = 103.891709 degrees
Iteration i = 9, freq_i = 100.008625 MHz, phi_i = 93.454635 degrees
Iteration i = 10, freq_i = 100.008750 MHz, phi_i = 82.956005 degrees
Iteration i = 11, freq_i = 100.008875 MHz, phi_i = 72.357968 degrees
Iteration i = 12, freq_i = 100.009000 MHz, phi_i = 61.906141 degrees
Iteration i = 13, freq_i = 100.009125 MHz, phi_i = 51.323893 degrees
Iteration i = 14, freq_i = 100.009250 MHz, phi_i = 40.865427 degrees
Iteration i = 15, freq_i = 100.009375 MHz, phi_i = 30.394780 degrees
Iteration i = 16, freq_i = 100.009500 MHz, phi_i = 19.802899 degrees
Iteration i = 17, freq_i = 100.009625 MHz, phi_i = 9.326770 degrees
Iteration i = 18, freq_i = 100.009750 MHz, phi_i = 358.846537 degrees
Iteration i = 19, freq_i = 100.009875 MHz, phi_i = 348.272914 degrees
Iteration i = 20, freq_i = 100.010000 MHz, phi_i = 337.812802 degrees
Iteration i = 21, freq_i = 100.010125 MHz, phi_i = 327.223179 degrees
Iteration i = 22, freq_i = 100.010250 MHz, phi_i = 316.753581 degrees
Iteration i = 23, freq_i = 100.010375 MHz, phi_i = 306.303491 degrees
Iteration i = 24, freq_i = 100.010500 MHz, phi_i = 295.691120 degrees
Iteration i = 25, freq_i = 100.010625 MHz, phi_i = 285.230260 degrees
Iteration i = 26, freq_i = 100.010750 MHz, phi_i = 274.770969 degrees
Iteration i = 27, freq_i = 100.010875 MHz, phi_i = 264.158280 degrees
Iteration i = 28, freq_i = 100.011000 MHz, phi_i = 253.679665 degrees
Iteration i = 29, freq_i = 100.011125 MHz, phi_i = 243.117201 degrees
Iteration i = 30, freq_i = 100.011250 MHz, phi_i = 232.628533 degrees
Iteration i = 31, freq_i = 100.011375 MHz, phi_i = 222.198801 degrees
Iteration i = 32, freq_i = 100.011500 MHz, phi_i = 211.611933 degrees
Iteration i = 33, freq_i = 100.011625 MHz, phi_i = 201.135250 degrees
Iteration i = 34, freq_i = 100.011750 MHz, phi_i = 190.533969 degrees
Iteration i = 35, freq_i = 100.011875 MHz, phi_i = 180.054081 degrees
Iteration i = 36, freq_i = 100.012000 MHz, phi_i = 169.602428 degrees
Iteration i = 37, freq_i = 100.012125 MHz, phi_i = 159.041472 degrees
Iteration i = 38, freq_i = 100.012250 MHz, phi_i = 148.559134 degrees
Iteration i = 39, freq_i = 100.012375 MHz, phi_i = 138.066087 degrees
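###Markdown
To turn this sweep into a delay estimate, a minimal sketch (assuming `gpts` and `phase_array` from the cell above) is to unwrap the phase and fit a straight line:
###Code
# Unwrap the phase so the linear trend is continuous, then fit phase = slope*f + offset
phase_unwrapped = np.unwrap(np.deg2rad(phase_array))
slope, offset = np.polyfit(gpts, phase_unwrapped, 1)  # slope in rad/MHz since gpts is in MHz
print("Fitted slope: %f degrees per kHz" % (np.rad2deg(slope) / 1e3))
###Output
_____no_output_____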
qiskit/advanced/ignis/5a_randomized_benchmarking.ipynb | ###Markdown
Trusted Notebook" align="middle"> Randomized Benchmarking---* **Last Updated:** March 1, 2019* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2 Introduction**Randomization benchmarking (RB)** is a well-known technique to measure average gate performance by running sequences of random Clifford gates that should return the qubits to the initial state. Qiskit Ignis has tools to generate one- and two-qubit Clifford gate sequences simultaneously. This notebook gives an example for how to use the ``ignis.verification.randomized_benchmarking`` module. This particular example shows how to run 2-qubit randomized benchmarking (RB) simultaneous with 1-qubit RB. There are also examples on how to use some of the companion functions for predicting RB fidelity.
###Code
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import Qiskit classes classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
###Output
_____no_output_____
###Markdown
Select the Parameters of the RB RunFirst, we need to choose the following parameters:- **nseeds:** The number of seeds. For each seed you will get a separate list of output circuits in rb_circs.- **length_vector:** The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.- **rb_pattern:** A list of the form [[i,j],[k],...] which will make simultaneous RB sequences where Qi,Qj are a 2-qubit RB sequence and Qk is a 1-qubit sequence, etc. The number of qubits is the sum of the entries. For 'regular' RB the qubit_pattern is just [[0]],[[0,1]].- **length_multiplier:** If this is an array it scales each rb_sequence by the multiplier.- **seed_offset:** What to start the seeds at (e.g. if we want to add more seeds later).- **align_cliffs:** If true, adds a barrier across all qubits in rb_pattern after each set of Cliffords.In this example we have 3 qubits Q0,Q1,Q2. We are running 2Q RB (on qubits Q0,Q2) and 1Q RB (on qubit Q1) simultaneously, where there are three times as many 1Q Clifford gates.
###Code
#Number of qubits
nQ = 3
#There are 3 qubits: Q0,Q1,Q2.
#Number of seeds (random sequences)
nseeds = 5
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q2 and 1Q RB on Q1
rb_pattern = [[0,2],[1]]
#Do three times as many 1Q Cliffords
length_multiplier = [1,3]
###Output
_____no_output_____
###Markdown
Generate RB sequencesWe generate RB sequences. We start with a small example (so it doesn't take too long to run).In order to generate the RB sequences **rb_circs**, which is a list of lists of quantum circuits, we run the function `rb.randomized_benchmarking_seq`.This function returns:- **rb_circs:** A list of lists of circuits for the RB sequences (separate list for each seed).- **xdata:** The Clifford lengths (with multiplier if applicable).
###Code
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['length_multiplier'] = length_multiplier
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
###Output
_____no_output_____
###Markdown
As an example, we print the circuit corresponding to the first RB sequence:
###Code
print(rb_circs[0][0])
###Output
┌───┐ ┌─────┐┌───┐ ┌───┐┌───┐ ░ ┌─────┐┌───┐ »
qr_0: |0>─┤ H ├─┤ Sdg ├┤ H ├──■─────┤ H ├┤ S ├─░──────┤ Sdg ├┤ H ├──■───────»
┌┴───┴┐└┬───┬┘├───┤ │ ░ ├───┤└─░─┘ ░ ┌───┐├─────┤├───┤ │ ┌───┐»
qr_1: |0>┤ Sdg ├─┤ H ├─┤ Y ├──┼───░─┤ Z ├──░─────┤ H ├┤ Sdg ├┤ H ├──┼──┤ X ├»
└─────┘ └───┘ └───┘┌─┴─┐ ░ ├───┤┌───┐ ░ └───┘├─────┤├───┤┌─┴─┐└───┘»
qr_2: |0>───────────────────┤ X ├───┤ H ├┤ S ├─░──────┤ Sdg ├┤ H ├┤ X ├─────»
└───┘ └───┘└───┘ ░ └─────┘└───┘└───┘ »
cr_0: 0 ═══════════════════════════════════════════════════════════════════»
»
cr_1: 0 ═══════════════════════════════════════════════════════════════════»
»
cr_2: 0 ═══════════════════════════════════════════════════════════════════»
»
« ┌───┐┌───┐┌───┐┌─┐
«qr_0: ┤ H ├┤ S ├┤ H ├┤M├───
« └─░─┘├───┤├───┤└╥┘┌─┐
«qr_1: ──░──┤ X ├┤ H ├─╫─┤M├
« ┌─┐ └───┘└───┘ ║ └╥┘
«qr_2: ─┤M├────────────╫──╫─
« └╥┘ ║ ║
«cr_0: ══╬═════════════╩══╬═
« ║ ║
«cr_1: ══╩════════════════╬═
« ║
«cr_2: ═══════════════════╩═
«
###Markdown
Look at the Unitary for 1 Circuit The Unitary representing each RB circuit should be the identity (with a global phase), since we multiply random Clifford elements, including a computed reversal gate. We simulate this using an Aer unitary simulator.
###Code
#Create a new circuit without the measurement
qc = qiskit.QuantumCircuit(*rb_circs[0][-1].qregs,*rb_circs[0][-1].cregs)
for i in rb_circs[0][-1][0:-nQ]:
qc.data.append(i)
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1', 'u2', 'u3', 'cx'] # use U,CX for now
job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates)
print(np.around(job.result().get_unitary(), 3))
###Output
[[0.707+0.707j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0.707+0.707j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0.707+0.707j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0.707+0.707j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0.707+0.707j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0.707+0.707j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0.707+0.707j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0.707+0.707j]]
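###Markdown
As a quick programmatic check (a small sketch reusing `job` from the cell above), we can divide out the global phase and compare the result with the identity:
###Code
#Divide out the global phase and compare the unitary with the identity
U = job.result().get_unitary()
print(np.allclose(U / U[0, 0], np.eye(U.shape[0]), atol=1e-6))
###Output
_____no_output_____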
###Markdown
Define the noise model We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
###Code
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
###Output
_____no_output_____
###Markdown
Execute on Aer simulatorWe can execute the RB sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider, and obtain a list of results, `result_list`.
###Code
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
result_list = []
qobj_list = []
import time
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
###Output
Compiling seed 0
Simulating seed 0
Compiling seed 1
Simulating seed 1
Compiling seed 2
Simulating seed 2
Compiling seed 3
Simulating seed 3
Compiling seed 4
Simulating seed 4
Finished Simulating
###Markdown
Get statistics about the survival probabilitiesThe results in **result_list** should fit to an exponentially decaying function $A \cdot \alpha ^ m + B$, where $m$ is the Clifford length.From $\alpha$ we can calculate the **Error per Clifford (EPC)**:$$ EPC = \frac{2^n-1}{2^n} (1-\alpha)$$(where $n=nQ$ is the number of qubits).
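A short worked example with an assumed value (for illustration only, not taken from the fit below): for $n=2$ and $\alpha=0.98$, $EPC = \frac{3}{4}(1-0.98) = 0.015$.
###Code
#Worked example of the EPC formula; alpha_example is an assumed value, not a fit result
alpha_example = 0.98
n = 2
epc_example = (2**n - 1) / 2**n * (1 - alpha_example)
print("Example EPC for alpha=%.2f: %f" % (alpha_example, epc_example))
###Output
_____no_output_____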
###Code
#Create an RBFitter object with 1 seed of data
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
###Output
_____no_output_____
###Markdown
Plot After 1 Seed
###Code
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plot with the Rest of the Seeds The plot is being updated after each seed.
###Code
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
for seed_num, data in enumerate(result_list):
plt.figure(figsize=(15, 6))
axis = [plt.subplot(1, 2, 1), plt.subplot(1, 2, 2)]
# Add another seed to the data
rbfit.add_data([data])
for i in range(2):
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=axis[i], add_label=True, show_plt=False)
# Add title and label
axis[i].set_title('%d Qubit RB - after seed %d'%(len(rb_opts['rb_pattern'][i]), seed_num), fontsize=18)
# Display
display.display(plt.gcf())
# Clear display after each seed and close
display.clear_output(wait=True)
time.sleep(1.0)
plt.close()
###Output
_____no_output_____
###Markdown
Add more shots to the data
###Code
shots = 200
result_list = []
qobj_list = []
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
#Add this data to the previous fit
rbfit.add_data(result_list)
#Replot
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Predicted Gate Fidelity From the known depolarizing errors in the simulation we can predict the **fidelity**. First we need to count the number of **gates per Clifford**.The function **gates_per_clifford** takes a compiled qobj and outputs the number of basis gates in each circuit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list,xdata[0],basis_gates,rb_opts['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
###Output
Number of u1 gates per Clifford: 0.273043
Number of u2 gates per Clifford: 0.937826
Number of u3 gates per Clifford: 0.484239
Number of cx gates per Clifford: 1.517174
###Markdown
The **two-qubit Clifford gate error** gives measured errors in the basis gates that were used to construct the Clifford. It assumes that the error in the underlying gates is depolarizing. It outputs the error per 2-qubit Clifford.The input to this function is:- **ngates:** list of the number of gates per 2Q Clifford.- **gate_qubit:** list of the qubit corresponding to the gate (0, 1 or -1). -1 corresponds to the 2Q gate.- **gate_err:** list of the gate errors.
###Code
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
gate_errs[[1,4]] = p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[[2,5]] = 2*p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[6] = p2Q*3/4 #convert from depolarizing error to epg (2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
###Output
Predicted 2Q Error per Clifford: 1.584700e-02
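###Markdown
The conversions in the cell above follow from the fact that a depolarizing channel with parameter $p$ on $n$ qubits has an average gate error of $\frac{2^n-1}{2^n}p$: $p/2$ for one qubit and $3p/4$ for two qubits.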
###Markdown
Run an RB Sequence with T1,T2 ErrorsWe now choose RB sequences that contain only 2-qubit Cliffords.We execute these sequences as before, but with a noise model extended with T1/T2 thermal relaxation error, and fit the exponentially decaying curve.
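Note that `thermal_relaxation_error(t1, t2, time)` requires $T_2 \le 2T_1$; here $T_2 = 80$ and $2T_1 = 200$, with all times in the same units as the gate durations.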
###Code
rb_opts2 = rb_opts.copy()
rb_opts2['rb_pattern'] = [[0,1]]
rb_opts2['length_multiplier'] = 1
rb_circs2, xdata2 = rb.randomized_benchmarking_seq(**rb_opts2)
noise_model2 = NoiseModel()
#Add T1/T2 noise to the simulation
t1 = 100.
t2 = 80.
gate1Q = 0.1
gate2Q = 0.5
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,gate1Q), 'u2')
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,2*gate1Q), 'u3')
noise_model2.add_all_qubit_quantum_error(
thermal_relaxation_error(t1,t2,gate2Q).tensor(thermal_relaxation_error(t1,t2,gate2Q)), 'cx')
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 500
result_list2 = []
qobj_list2 = []
for rb_seed,rb_circ_seed in enumerate(rb_circs2):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model2, backend_options={'max_parallel_experiments': 0})
result_list2.append(job.result())
qobj_list2.append(qobj)
print("Finished Simulating")
#Create an RBFitter object
rbfit = rb.RBFitter(result_list2, xdata2, rb_opts2['rb_pattern'])
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('2 Qubit RB with T1/T2 noise', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
We count again the number of **gates per Clifford** as before, and calculate the **two qubit Clifford gate error**, using the predicted primitive gate errors from the coherence limit.
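The coherence limit is the lowest average gate error achievable for a given $T_1$, $T_2$ and gate duration; `rb.rb_utils.coherence_limit` computes it for one- and two-qubit gates, and those values are used as the primitive gate errors below.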
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list2,xdata2[0],basis_gates,rb_opts2['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
#Here are the predicted primitive gate errors from the coherence limit
gate_errs[[1,4]] = rb.rb_utils.coherence_limit(1,[t1],[t2],gate1Q)
gate_errs[[2,5]] = rb.rb_utils.coherence_limit(1,[t1],[t2],2*gate1Q)
gate_errs[6] = rb.rb_utils.coherence_limit(2,[t1,t1],[t2,t2],gate2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Trusted Notebook" align="middle"> Randomized Benchmarking---* **Last Updated:** March 1, 2019* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2 Introduction**Randomization benchmarking (RB)** is a well-known technique to measure average gate performance by running sequences of random Clifford gates that should return the qubits to the initial state. Qiskit Ignis has tools to generate one- and two-qubit Clifford gate sequences simultaneously. This notebook gives an example for how to use the ``ignis.verification.randomized_benchmarking`` module. This particular example shows how to run 2-qubit randomized benchmarking (RB) simultaneous with 1-qubit RB. There are also examples on how to use some of the companion functions for predicting RB fidelity.
###Code
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import Qiskit classes classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
###Output
_____no_output_____
###Markdown
Select the Parameters of the RB RunFirst, we need to choose the following parameters:- **nseeds:** The number of seeds. For each seed you will get a separate list of output circuits in rb_circs.- **length_vector:** The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.- **rb_pattern:** A list of the form [[i,j],[k],...] which will make simultaneous RB sequences where Qi,Qj are a 2-qubit RB sequence and Qk is a 1-qubit sequence, etc. The number of qubits is the sum of the entries. For 'regular' RB the qubit_pattern is just [[0]],[[0,1]].- **length_multiplier:** If this is an array it scales each rb_sequence by the multiplier.- **seed_offset:** What to start the seeds at (e.g. if we want to add more seeds later).- **align_cliffs:** If true, adds a barrier across all qubits in rb_pattern after each set of Cliffords.In this example we have 3 qubits Q0,Q1,Q2. We are running 2Q RB (on qubits Q0,Q2) and 1Q RB (on qubit Q1) simultaneously, where there are three times as many 1Q Clifford gates.
###Code
#Number of qubits
nQ = 3
#There are 3 qubits: Q0,Q1,Q2.
#Number of seeds (random sequences)
nseeds = 5
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q2 and 1Q RB on Q1
rb_pattern = [[0,2],[1]]
#Do three times as many 1Q Cliffords
length_multiplier = [1,3]
###Output
_____no_output_____
###Markdown
Generate RB sequencesWe generate RB sequences. We start with a small example (so it doesn't take too long to run).In order to generate the RB sequences **rb_circs**, which is a list of lists of quantum circuits, we run the function `rb.randomized_benchmarking_seq`.This function returns:- **rb_circs:** A list of lists of circuits for the RB sequences (separate list for each seed).- **xdata:** The Clifford lengths (with multiplier if applicable).
###Code
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['length_multiplier'] = length_multiplier
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
###Output
_____no_output_____
###Markdown
As an example, we print the circuit corresponding to the first RB sequence:
###Code
print(rb_circs[0][0])
###Output
┌───┐ ┌─────┐┌───┐ ┌───┐┌───┐ ░ ┌─────┐┌───┐ »
qr_0: |0>─┤ H ├─┤ Sdg ├┤ H ├──■─────┤ H ├┤ S ├─░──────┤ Sdg ├┤ H ├──■───────»
┌┴───┴┐└┬───┬┘├───┤ │ ░ ├───┤└─░─┘ ░ ┌───┐├─────┤├───┤ │ ┌───┐»
qr_1: |0>┤ Sdg ├─┤ H ├─┤ Y ├──┼───░─┤ Z ├──░─────┤ H ├┤ Sdg ├┤ H ├──┼──┤ X ├»
└─────┘ └───┘ └───┘┌─┴─┐ ░ ├───┤┌───┐ ░ └───┘├─────┤├───┤┌─┴─┐└───┘»
qr_2: |0>───────────────────┤ X ├───┤ H ├┤ S ├─░──────┤ Sdg ├┤ H ├┤ X ├─────»
└───┘ └───┘└───┘ ░ └─────┘└───┘└───┘ »
cr_0: 0 ═══════════════════════════════════════════════════════════════════»
»
cr_1: 0 ═══════════════════════════════════════════════════════════════════»
»
cr_2: 0 ═══════════════════════════════════════════════════════════════════»
»
« ┌───┐┌───┐┌───┐┌─┐
«qr_0: ┤ H ├┤ S ├┤ H ├┤M├───
« └─░─┘├───┤├───┤└╥┘┌─┐
«qr_1: ──░──┤ X ├┤ H ├─╫─┤M├
« ┌─┐ └───┘└───┘ ║ └╥┘
«qr_2: ─┤M├────────────╫──╫─
« └╥┘ ║ ║
«cr_0: ══╬═════════════╩══╬═
« ║ ║
«cr_1: ══╩════════════════╬═
« ║
«cr_2: ═══════════════════╩═
«
###Markdown
Look at the Unitary for 1 Circuit The Unitary representing each RB circuit should be the identity (with a global phase), since we multiply random Clifford elements, including a computed reversal gate. We simulate this using an Aer unitary simulator.
###Code
#Create a new circuit without the measurement
qc = qiskit.QuantumCircuit(*rb_circs[0][-1].qregs,*rb_circs[0][-1].cregs)
for i in rb_circs[0][-1][0:-nQ]:
qc.data.append(i)
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1', 'u2', 'u3', 'cx'] # use U,CX for now
job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates)
print(np.around(job.result().get_unitary(), 3))
###Output
[[0.707+0.707j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0.707+0.707j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0.707+0.707j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0.707+0.707j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0.707+0.707j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0.707+0.707j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0.707+0.707j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0.707+0.707j]]
###Markdown
Define the noise model We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
###Code
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
###Output
_____no_output_____
###Markdown
Execute on Aer simulatorWe can execute the RB sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider, and obtain a list of results, `result_list`.
###Code
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
result_list = []
qobj_list = []
import time
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
###Output
Compiling seed 0
Simulating seed 0
Compiling seed 1
Simulating seed 1
Compiling seed 2
Simulating seed 2
Compiling seed 3
Simulating seed 3
Compiling seed 4
Simulating seed 4
Finished Simulating
###Markdown
Get statistics about the survival probabilitiesThe results in **result_list** should fit to an exponentially decaying function $A \cdot \alpha ^ m + B$, where $m$ is the Clifford length.From $\alpha$ we can calculate the **Error per Clifford (EPC)**:$$ EPC = \frac{2^n-1}{2^n} (1-\alpha)$$(where $n=nQ$ is the number of qubits).
###Code
#Create an RBFitter object with 1 seed of data
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
###Output
_____no_output_____
###Markdown
Plot After 1 Seed
###Code
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plot with the Rest of the Seeds The plot is being updated after each seed.
###Code
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
for seed_num, data in enumerate(result_list):
plt.figure(figsize=(15, 6))
axis = [plt.subplot(1, 2, 1), plt.subplot(1, 2, 2)]
# Add another seed to the data
rbfit.add_data([data])
for i in range(2):
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=axis[i], add_label=True, show_plt=False)
# Add title and label
axis[i].set_title('%d Qubit RB - after seed %d'%(len(rb_opts['rb_pattern'][i]), seed_num), fontsize=18)
# Display
display.display(plt.gcf())
# Clear display after each seed and close
display.clear_output(wait=True)
time.sleep(1.0)
plt.close()
###Output
_____no_output_____
###Markdown
Add more shots to the data
###Code
shots = 200
result_list = []
qobj_list = []
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
#Add this data to the previous fit
rbfit.add_data(result_list)
#Replot
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Predicted Gate Fidelity From the known depolarizing errors in the simulation we can predict the **fidelity**. First we need to count the number of **gates per Clifford**.The function **gates_per_clifford** takes a compiled qobj and outputs the number of basis gates in each circuit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list,xdata[0],basis_gates,rb_opts['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
###Output
Number of u1 gates per Clifford: 0.273043
Number of u2 gates per Clifford: 0.937826
Number of u3 gates per Clifford: 0.484239
Number of cx gates per Clifford: 1.517174
###Markdown
The **two-qubit Clifford gate error** gives measured errors in the basis gates that were used to construct the Clifford. It assumes that the error in the underlying gates is depolarizing. It outputs the error per 2-qubit Clifford.The input to this function is:- **ngates:** list of the number of gates per 2Q Clifford.- **gate_qubit:** list of the qubit corresponding to the gate (0, 1 or -1). -1 corresponds to the 2Q gate.- **gate_err:** list of the gate errors.
###Code
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
gate_errs[[1,4]] = p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[[2,5]] = 2*p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[6] = p2Q*3/4 #convert from depolarizing error to epg (2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
###Output
Predicted 2Q Error per Clifford: 1.584700e-02
###Markdown
Run an RB Sequence with T1,T2 ErrorsWe now choose RB sequences that contain only 2-qubit Cliffords.We execute these sequences as before, but with a noise model extended with T1/T2 thermal relaxation error, and fit the exponentially decaying curve.
###Code
rb_opts2 = rb_opts.copy()
rb_opts2['rb_pattern'] = [[0,1]]
rb_opts2['length_multiplier'] = 1
rb_circs2, xdata2 = rb.randomized_benchmarking_seq(**rb_opts2)
noise_model2 = NoiseModel()
#Add T1/T2 noise to the simulation
t1 = 100.
t2 = 80.
gate1Q = 0.1
gate2Q = 0.5
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,gate1Q), 'u2')
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,2*gate1Q), 'u3')
noise_model2.add_all_qubit_quantum_error(
thermal_relaxation_error(t1,t2,gate2Q).tensor(thermal_relaxation_error(t1,t2,gate2Q)), 'cx')
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 500
result_list2 = []
qobj_list2 = []
for rb_seed,rb_circ_seed in enumerate(rb_circs2):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model2, backend_options={'max_parallel_experiments': 0})
result_list2.append(job.result())
qobj_list2.append(qobj)
print("Finished Simulating")
#Create an RBFitter object
rbfit = rb.RBFitter(result_list2, xdata2, rb_opts2['rb_pattern'])
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('2 Qubit RB with T1/T2 noise', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
We count again the number of **gates per Clifford** as before, and calculate the **two qubit Clifford gate error**, using the predicted primitive gate errors from the coherence limit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list2,xdata2[0],basis_gates,rb_opts2['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
#Here are the predicted primitive gate errors from the coherence limit
gate_errs[[1,4]] = rb.rb_utils.coherence_limit(1,[t1],[t2],gate1Q)
gate_errs[[2,5]] = rb.rb_utils.coherence_limit(1,[t1],[t2],2*gate1Q)
gate_errs[6] = rb.rb_utils.coherence_limit(2,[t1,t1],[t2,t2],gate2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Trusted Notebook" align="middle"> Randomized Benchmarking---* **Last Updated:** March 1, 2019* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2 Introduction**Randomization benchmarking (RB)** is a well-known technique to measure average gate performance by running sequences of random Clifford gates that should return the qubits to the initial state. Qiskit Ignis has tools to generate one- and two-qubit Clifford gate sequences simultaneously. This notebook gives an example for how to use the ``ignis.verification.randomized_benchmarking`` module. This particular example shows how to run 2-qubit randomized benchmarking (RB) simulataneous with 1-qubit RB. There are also examples on how to use some of the companion functions for predicting RB fidelity.
###Code
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import Qiskit classes classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
###Output
_____no_output_____
###Markdown
Select the Parameters of the RB RunFirst, we need to choose the following parameters:- **nseeds:** The number of seeds. For each seed you will get a separate list of output circuits in rb_circs.- **length_vector:** The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.- **rb_pattern:** A list of the form [[i,j],[k],...] which will make simultaneous RB sequences where Qi,Qj are a 2-qubit RB sequence and Qk is a 1-qubit sequence, etc. The number of qubits is the sum of the entries. For 'regular' RB the qubit_pattern is just [[0]],[[0,1]].- **length_multiplier:** If this is an array it scales each rb_sequence by the multiplier.- **seed_offset:** What to start the seeds at (e.g. if we want to add more seeds later).- **align_cliffs:** If true, adds a barrier across all qubits in rb_pattern after each set of Cliffords.In this example we have 3 qubits Q0,Q1,Q2. We are running 2Q RB (on qubits Q0,Q2) and 1Q RB (on qubit Q1) simultaneously, where there are three times as many 1Q Clifford gates.
###Code
#Number of qubits
nQ = 3
#There are 3 qubits: Q0,Q1,Q2.
#Number of seeds (random sequences)
nseeds = 5
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q2 and 1Q RB on Q1
rb_pattern = [[0,2],[1]]
#Do three times as many 1Q Cliffords
length_multiplier = [1,3]
###Output
_____no_output_____
###Markdown
Generate RB sequencesWe generate RB sequences. We start with a small example (so it doesn't take too long to run).In order to generate the RB sequences **rb_circs**, which is a list of lists of quantum circuits, we run the function **rb.randomized_benchmarking_seq**.This function returns:- **rb_circs:** A list of lists of circuits for the RB sequences (separate list for each seed).- **xdata:** The Clifford lengths (with multiplier if applicable).
###Code
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['length_multiplier'] = length_multiplier
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
###Output
_____no_output_____
###Markdown
As an example, we print the circuit corresponding to the first RB sequence:
###Code
print(rb_circs[0][0])
###Output
┌───┐ ┌─────┐┌───┐ ┌───┐┌───┐ ░ ┌─────┐┌───┐ »
qr_0: |0>─┤ H ├─┤ Sdg ├┤ H ├──■─────┤ H ├┤ S ├─░──────┤ Sdg ├┤ H ├──■───────»
┌┴───┴┐└┬───┬┘├───┤ │ ░ ├───┤└─░─┘ ░ ┌───┐├─────┤├───┤ │ ┌───┐»
qr_1: |0>┤ Sdg ├─┤ H ├─┤ Y ├──┼───░─┤ Z ├──░─────┤ H ├┤ Sdg ├┤ H ├──┼──┤ X ├»
└─────┘ └───┘ └───┘┌─┴─┐ ░ ├───┤┌───┐ ░ └───┘├─────┤├───┤┌─┴─┐└───┘»
qr_2: |0>───────────────────┤ X ├───┤ H ├┤ S ├─░──────┤ Sdg ├┤ H ├┤ X ├─────»
└───┘ └───┘└───┘ ░ └─────┘└───┘└───┘ »
cr_0: 0 ═══════════════════════════════════════════════════════════════════»
»
cr_1: 0 ═══════════════════════════════════════════════════════════════════»
»
cr_2: 0 ═══════════════════════════════════════════════════════════════════»
»
« ┌───┐┌───┐┌───┐┌─┐
«qr_0: ┤ H ├┤ S ├┤ H ├┤M├───
« └─░─┘├───┤├───┤└╥┘┌─┐
«qr_1: ──░──┤ X ├┤ H ├─╫─┤M├
« ┌─┐ └───┘└───┘ ║ └╥┘
«qr_2: ─┤M├────────────╫──╫─
« └╥┘ ║ ║
«cr_0: ══╬═════════════╩══╬═
« ║ ║
«cr_1: ══╩════════════════╬═
« ║
«cr_2: ═══════════════════╩═
«
###Markdown
Look at the Unitary for 1 Circuit The Unitary representing each RB circuit should be the identity (with a global phase), since we multiply random Clifford elements, including a computed reversal gate. We simulate this using an Aer unitary simulator.
###Code
#Create a new circuit without the measurement
qc = qiskit.QuantumCircuit(*rb_circs[0][-1].qregs,*rb_circs[0][-1].cregs)
for i in rb_circs[0][-1][0:-nQ]:
qc.data.append(i)
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1', 'u2', 'u3', 'cx'] # use U,CX for now
job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates)
print(np.around(job.result().get_unitary(), 3))
###Output
[[0.707+0.707j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0.707+0.707j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0.707+0.707j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0.707+0.707j 0. +0.j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0.707+0.707j
0. +0.j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0.707+0.707j 0. +0.j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0.707+0.707j 0. +0.j ]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0.707+0.707j]]
###Markdown
Define the noise model We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
###Code
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
###Output
_____no_output_____
###Markdown
Execute on Aer simulatorWe can execute the RB sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider, and obtain a list of results, **result_list**.
###Code
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
result_list = []
qobj_list = []
import time
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
###Output
Compiling seed 0
Simulating seed 0
Compiling seed 1
Simulating seed 1
Compiling seed 2
Simulating seed 2
Compiling seed 3
Simulating seed 3
Compiling seed 4
Simulating seed 4
Finished Simulating
###Markdown
Get statistics about the survival probabilitiesThe results in **result_list** should fit to an exponentially decaying function $A \cdot \alpha ^ m + B$, where $m$ is the Clifford length.From $\alpha$ we can calculate the **Error per Clifford (EPC)**:$$ EPC = \frac{2^n-1}{2^n} (1-\alpha)$$(where $n=nQ$ is the number of qubits).
###Code
#Create an RBFitter object with 1 seed of data
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
###Output
_____no_output_____
###Markdown
Plot After 1 Seed
###Code
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plot with the Rest of the Seeds The plot is updated after each seed.
###Code
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
for seed_num, data in enumerate(result_list):
plt.figure(figsize=(15, 6))
axis = [plt.subplot(1, 2, 1), plt.subplot(1, 2, 2)]
# Add another seed to the data
rbfit.add_data([data])
for i in range(2):
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=axis[i], add_label=True, show_plt=False)
# Add title and label
axis[i].set_title('%d Qubit RB - after seed %d'%(len(rb_opts['rb_pattern'][i]), seed_num), fontsize=18)
# Display
display.display(plt.gcf())
# Clear display after each seed and close
display.clear_output(wait=True)
time.sleep(1.0)
plt.close()
###Output
_____no_output_____
###Markdown
Add more shots to the data
###Code
shots = 200
result_list = []
qobj_list = []
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
#Add this data to the previous fit
rbfit.add_data(result_list)
#Replot
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Predicted Gate Fidelity From the known depolarizing errors in the simulation we can predict the **fidelity**. First we need to count the number of **gates per Clifford**.The function **gates_per_clifford** takes a compiled qobj and outputs the number of basis gates in each circuit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list,xdata[0],basis_gates,rb_opts['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
###Output
Number of u1 gates per Clifford: 0.273043
Number of u2 gates per Clifford: 0.937826
Number of u3 gates per Clifford: 0.484239
Number of cx gates per Clifford: 1.517174
###Markdown
The **two-qubit Clifford gate error** gives measured errors in the basis gates that were used to construct the Clifford. It assumes that the error in the underlying gates is depolarizing. It outputs the error per 2-qubit Clifford.The input to this function is:- **ngates:** list of the number of gates per 2Q Clifford.- **gate_qubit:** list of the qubit corresponding to the gate (0, 1 or -1). -1 corresponds to the 2Q gate.- **gate_err:** list of the gate errors.
###Code
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
gate_errs[[1,4]] = p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[[2,5]] = 2*p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[6] = p2Q*3/4 #convert from depolarizing error to epg (2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
###Output
Predicted 2Q Error per Clifford: 1.584700e-02
###Markdown
Run an RB Sequence with T1,T2 ErrorsWe now choose RB sequences that contain only 2-qubit Cliffords.We execute these sequences as before, but with a noise model extended with T1/T2 thermal relaxation error, and fit the exponentially decaying curve.
###Code
rb_opts2 = rb_opts.copy()
rb_opts2['rb_pattern'] = [[0,1]]
rb_opts2['length_multiplier'] = 1
rb_circs2, xdata2 = rb.randomized_benchmarking_seq(**rb_opts2)
noise_model2 = NoiseModel()
#Add T1/T2 noise to the simulation
t1 = 100.
t2 = 80.
gate1Q = 0.1
gate2Q = 0.5
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,gate1Q), 'u2')
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,2*gate1Q), 'u3')
noise_model2.add_all_qubit_quantum_error(
thermal_relaxation_error(t1,t2,gate2Q).tensor(thermal_relaxation_error(t1,t2,gate2Q)), 'cx')
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 500
result_list2 = []
qobj_list2 = []
for rb_seed,rb_circ_seed in enumerate(rb_circs2):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model2, backend_options={'max_parallel_experiments': 0})
result_list2.append(job.result())
qobj_list2.append(qobj)
print("Finished Simulating")
#Create an RBFitter object
rbfit = rb.RBFitter(result_list2, xdata2, rb_opts2['rb_pattern'])
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('2 Qubit RB with T1/T2 noise', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
We count again the number of **gates per Clifford** as before, and calculate the **two qubit Clifford gate error**, using the predicted primitive gate errors from the coherence limit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list2,xdata2[0],basis_gates,rb_opts2['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
#Here are the predicted primitive gate errors from the coherence limit
gate_errs[[1,4]] = rb.rb_utils.coherence_limit(1,[t1],[t2],gate1Q)
gate_errs[[2,5]] = rb.rb_utils.coherence_limit(1,[t1],[t2],2*gate1Q)
gate_errs[6] = rb.rb_utils.coherence_limit(2,[t1,t1],[t2,t2],gate2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
![qiskit_header.png](attachment:qiskit_header.png) Randomized Benchmarking---* **Last Updated:** March 1, 2019* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2 Introduction**Randomized benchmarking (RB)** is a well-known technique to measure average gate performance by running sequences of random Clifford gates that should return the qubits to the initial state. Qiskit Ignis has tools to generate one- and two-qubit Clifford gate sequences simultaneously. This notebook gives an example of how to use the ``ignis.verification.randomized_benchmarking`` module. This particular example shows how to run 2-qubit randomized benchmarking (RB) simultaneously with 1-qubit RB. There are also examples of how to use some of the companion functions for predicting RB fidelity.
###Code
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import Qiskit classes classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
###Output
_____no_output_____
###Markdown
Select the Parameters of the RB RunFirst, we need to choose the following parameters:- **nseeds:** The number of seeds. For each seed you will get a separate list of output circuits in rb_circs.- **length_vector:** The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.- **rb_pattern:** A list of the form [[i,j],[k],...] which will make simultaneous RB sequences where Qi,Qj are a 2-qubit RB sequence and Qk is a 1-qubit sequence, etc. The number of qubits is the sum of the entries. For 'regular' RB the qubit_pattern is just [[0]],[[0,1]].- **length_multiplier:** If this is an array it scales each rb_sequence by the multiplier.- **seed_offset:** What to start the seeds at (e.g. if we want to add more seeds later).- **align_cliffs:** If true, adds a barrier across all qubits in rb_pattern after each set of Cliffords.In this example we have 3 qubits Q0,Q1,Q2. We are running 2Q RB (on qubits Q0,Q2) and 1Q RB (on qubit Q1) simultaneously, where there are three times as many 1Q Clifford gates.
###Code
#Number of qubits
nQ = 3
#There are 3 qubits: Q0,Q1,Q2.
#Number of seeds (random sequences)
nseeds = 5
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q2 and 1Q RB on Q1
rb_pattern = [[0,2],[1]]
#Do three times as many 1Q Cliffords
length_multiplier = [1,3]
###Output
_____no_output_____
###Markdown
Generate RB sequencesWe generate RB sequences. We start with a small example (so it doesn't take too long to run).In order to generate the RB sequences **rb_circs**, which is a list of lists of quantum circuits, we run the function `rb.randomized_benchmarking_seq`.This function returns:- **rb_circs:** A list of lists of circuits for the RB sequences (separate list for each seed).- **xdata:** The Clifford lengths (with multiplier if applicable).
###Code
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['length_multiplier'] = length_multiplier
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
###Output
Making the n=2 Clifford Table
###Markdown
As an example, we print the circuit corresponding to the first RB sequence:
###Code
print(rb_circs[0][0])
###Output
┌───┐┌─────┐ ┌───┐ ┌───┐ ░ ┌───┐ ┌───┐┌───┐»
qr_0: |0>┤ H ├┤ Sdg ├─┤ H ├───■──┤ Z ├──────░─┤ Z ├────────────■──┤ H ├┤ S ├»
└─░─┘└┬───┬┘┌┴───┴┐ │ ├───┤┌───┐ ░ └─░─┘┌───┐┌───┐ │ └─░─┘├───┤»
qr_1: |0>──░───┤ H ├─┤ Sdg ├──┼──┤ H ├┤ X ├─────░──┤ H ├┤ Z ├──┼────░──┤ Z ├»
░ └───┘ └─────┘┌─┴─┐├───┤└───┘ ░ ┌───┐└───┘└───┘┌─┴─┐ ┌─┐ └───┘»
qr_2: |0>───────────────────┤ X ├┤ X ├──────░─┤ X ├──────────┤ X ├─┤M├──────»
└───┘└───┘ ░ └───┘ └───┘ └╥┘ »
cr_0: 0 ═══════════════════════════════════════════════════════════╬═══════»
║ »
cr_1: 0 ═══════════════════════════════════════════════════════════╩═══════»
»
cr_2: 0 ═══════════════════════════════════════════════════════════════════»
»
« ┌───┐ ┌─┐
«qr_0: ─┤ H ├──────┤M├───
« ┌┴───┴┐┌───┐└╥┘┌─┐
«qr_1: ┤ Sdg ├┤ H ├─╫─┤M├
« └─────┘└───┘ ║ └╥┘
«qr_2: ─────────────╫──╫─
« ║ ║
«cr_0: ═════════════╩══╬═
« ║
«cr_1: ════════════════╬═
« ║
«cr_2: ════════════════╩═
«
###Markdown
Look at the Unitary for 1 Circuit The Unitary representing each RB circuit should be the identity (with a global phase), since we multiply random Clifford elements, including a computed reversal gate. We simulate this using an Aer unitary simulator.
###Code
#Create a new circuit without the measurement
qc = qiskit.QuantumCircuit(*rb_circs[0][-1].qregs,*rb_circs[0][-1].cregs)
for i in rb_circs[0][-1][0:-nQ]:
qc.data.append(i)
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1', 'u2', 'u3', 'cx'] # use U,CX for now
job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates)
print(np.around(job.result().get_unitary(), 3))
###Output
[[ 1.-0.j 0.-0.j 0.+0.j 0.+0.j 0.-0.j -0.-0.j 0.+0.j 0.+0.j]
[ 0.-0.j 1.-0.j 0.+0.j 0.+0.j 0.+0.j -0.+0.j 0.+0.j 0.+0.j]
[-0.+0.j -0.+0.j 1.-0.j -0.-0.j 0.+0.j -0.+0.j 0.-0.j 0.+0.j]
[ 0.-0.j -0.+0.j 0.-0.j 1.-0.j -0.+0.j -0.+0.j 0.-0.j 0.+0.j]
[-0.-0.j -0.+0.j -0.-0.j -0.-0.j 1.-0.j -0.+0.j 0.+0.j 0.+0.j]
[-0.+0.j -0.+0.j -0.-0.j -0.+0.j 0.+0.j 1.-0.j 0.+0.j 0.+0.j]
[ 0.+0.j 0.-0.j -0.-0.j -0.+0.j -0.+0.j -0.+0.j 1.-0.j -0.+0.j]
[ 0.+0.j -0.-0.j -0.+0.j 0.+0.j 0.-0.j -0.+0.j 0.+0.j 1.-0.j]]
###Markdown
Define the noise model We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
###Code
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
###Output
_____no_output_____
###Markdown
Execute on Aer simulatorWe can execute the RB sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider, and obtain a list of results, `result_list`.
###Code
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
result_list = []
qobj_list = []
import time
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
###Output
Compiling seed 0
Simulating seed 0
Compiling seed 1
Simulating seed 1
Compiling seed 2
Simulating seed 2
Compiling seed 3
Simulating seed 3
Compiling seed 4
Simulating seed 4
Finished Simulating
###Markdown
Get statistics about the survival probabilitiesThe results in **result_list** should fit to an exponentially decaying function $A \cdot \alpha ^ m + B$, where $m$ is the Clifford length.From $\alpha$ we can calculate the **Error per Clifford (EPC)**:$$ EPC = \frac{2^n-1}{2^n} (1-\alpha)$$(where $n=nQ$ is the number of qubits).
###Code
#Create an RBFitter object with 1 seed of data
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
###Output
_____no_output_____
###Markdown
Plot After 1 Seed
###Code
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plot with the Rest of the Seeds The plot is being updated after each seed.
###Code
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
for seed_num, data in enumerate(result_list):
plt.figure(figsize=(15, 6))
axis = [plt.subplot(1, 2, 1), plt.subplot(1, 2, 2)]
# Add another seed to the data
rbfit.add_data([data])
for i in range(2):
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=axis[i], add_label=True, show_plt=False)
# Add title and label
axis[i].set_title('%d Qubit RB - after seed %d'%(len(rb_opts['rb_pattern'][i]), seed_num), fontsize=18)
# Display
display.display(plt.gcf())
# Clear display after each seed and close
display.clear_output(wait=True)
time.sleep(1.0)
plt.close()
###Output
_____no_output_____
###Markdown
Add more shots to the data
###Code
shots = 200
result_list = []
qobj_list = []
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
#Add this data to the previous fit
rbfit.add_data(result_list)
#Replot
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Predicted Gate Fidelity From the known depolarizing errors on the simulation we can predict the **fidelity**. First we need to count the number of **gates per Clifford**.The function **gates_per_clifford** takes a compiled qobj and outputs the number of basis gates in each circuit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list,xdata[0],basis_gates,rb_opts['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
###Output
Number of u1 gates per Clifford: 0.253043
Number of u2 gates per Clifford: 1.020326
Number of u3 gates per Clifford: 0.425109
Number of cx gates per Clifford: 1.471304
###Markdown
The **two-qubit Clifford gate error** gives measured errors in the basis gates that were used to construct the Clifford. It assumes that the error in the underlying gates is depolarizing. It outputs the error per a 2-qubit Clifford.The input to this function is:- **ngates:** list of the number of gates per 2Q Clifford.- **gate_qubit:** list of the qubit corresponding to the gate (0, 1 or -1). -1 corresponds to the 2Q gate.- **gate_err:** list of the gate errors.
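As an aside (our reading of the conversion comments in the cell below, not a claim from the original notebook): the factors used there follow from converting a depolarizing parameter $p$ on $n$ qubits into an error per gate of $\frac{2^n-1}{2^n}\,p$, i.e. $p/2$ for one qubit and $3p/4$ for two qubits:
```python
# Sanity check of the assumed depolarizing-to-EPG conversion
p1Q, p2Q = 0.002, 0.01
print(p1Q * (2**1 - 1) / 2**1)  # 1Q error per gate: 0.001  (= p1Q/2)
print(p2Q * (2**2 - 1) / 2**2)  # 2Q error per gate: 0.0075 (= p2Q*3/4)
```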
###Code
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
gate_errs[[1,4]] = p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[[2,5]] = 2*p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[6] = p2Q*3/4 #convert from depolarizing error to epg (2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
###Output
Predicted 2Q Error per Clifford: 1.542410e-02
###Markdown
Run an RB Sequence with T1,T2 ErrorsWe now choose RB sequences that contain only 2-qubit Cliffords.We execute these sequences as before, but with a noise model extended with T1/T2 thermal relaxation error, and fit the exponentially decaying curve.
###Code
rb_opts2 = rb_opts.copy()
rb_opts2['rb_pattern'] = [[0,1]]
rb_opts2['length_multiplier'] = 1
rb_circs2, xdata2 = rb.randomized_benchmarking_seq(**rb_opts2)
noise_model2 = NoiseModel()
#Add T1/T2 noise to the simulation
t1 = 100.
t2 = 80.
gate1Q = 0.1
gate2Q = 0.5
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,gate1Q), 'u2')
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,2*gate1Q), 'u3')
noise_model2.add_all_qubit_quantum_error(
thermal_relaxation_error(t1,t2,gate2Q).tensor(thermal_relaxation_error(t1,t2,gate2Q)), 'cx')
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 500
result_list2 = []
qobj_list2 = []
for rb_seed,rb_circ_seed in enumerate(rb_circs2):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model2, backend_options={'max_parallel_experiments': 0})
result_list2.append(job.result())
qobj_list2.append(qobj)
print("Finished Simulating")
#Create an RBFitter object
rbfit = rb.RBFitter(result_list2, xdata2, rb_opts2['rb_pattern'])
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('2 Qubit RB with T1/T2 noise', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
We count again the number of **gates per Clifford** as before, and calculate the **two-qubit Clifford gate error**, using the predicted primitive gate errors from the coherence limit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list2,xdata2[0],basis_gates,rb_opts2['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
#Here are the predicted primitive gate errors from the coherence limit
gate_errs[[1,4]] = rb.rb_utils.coherence_limit(1,[t1],[t2],gate1Q)
gate_errs[[2,5]] = rb.rb_utils.coherence_limit(1,[t1],[t2],2*gate1Q)
gate_errs[6] = rb.rb_utils.coherence_limit(2,[t1,t1],[t2,t2],gate2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
![qiskit_header.png](attachment:qiskit_header.png) Randomized Benchmarking---* **Last Updated:** March 1, 2019* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2 Introduction**Randomized benchmarking (RB)** is a well-known technique to measure average gate performance by running sequences of random Clifford gates that should return the qubits to the initial state. Qiskit Ignis has tools to generate one- and two-qubit Clifford gate sequences simultaneously. This notebook gives an example of how to use the ``ignis.verification.randomized_benchmarking`` module. This particular example shows how to run 2-qubit randomized benchmarking (RB) simultaneously with 1-qubit RB. There are also examples on how to use some of the companion functions for predicting RB fidelity.
###Code
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import Qiskit classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb
###Output
_____no_output_____
###Markdown
Select the Parameters of the RB RunFirst, we need to choose the following parameters:- **nseeds:** The number of seeds. For each seed you will get a separate list of output circuits in rb_circs.- **length_vector:** The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.- **rb_pattern:** A list of the form [[i,j],[k],...] which will make simultaneous RB sequences where Qi,Qj are a 2-qubit RB sequence and Qk is a 1-qubit sequence, etc. The number of qubits is the sum of the entries. For 'regular' RB the qubit_pattern is just [[0]],[[0,1]].- **length_multiplier:** If this is an array it scales each rb_sequence by the multiplier.- **seed_offset:** What to start the seeds at (e.g. if we want to add more seeds later).- **align_cliffs:** If true adds a barrier across all qubits in rb_pattern after each set of cliffords.In this example we have 3 qubits Q0,Q1,Q2. We are running 2Q RB (on qubits Q0,Q2) and 1Q RB (on qubit Q1) simultaneously, where there are three times as many 1Q Clifford gates.
###Code
#Number of qubits
nQ = 3
#There are 3 qubits: Q0,Q1,Q2.
#Number of seeds (random sequences)
nseeds = 5
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q2 and 1Q RB on Q1
rb_pattern = [[0,2],[1]]
#Do three times as many 1Q Cliffords
length_multiplier = [1,3]
###Output
_____no_output_____
###Markdown
Generate RB sequencesWe generate RB sequences. We start with a small example (so it doesn't take too long to run).In order to generate the RB sequences **rb_circs**, which is a list of lists of quantum circuits, we run the function `rb.randomized_benchmarking_seq`.This function returns:- **rb_circs:** A list of lists of circuits for the rb sequences (separate list for each seed).- **xdata:** The Clifford lengths (with multiplier if applicable).
###Code
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['length_multiplier'] = length_multiplier
rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
###Output
Making the n=2 Clifford Table
###Markdown
As an example, we print the circuit corresponding to the first RB sequence:
###Code
print(rb_circs[0][0])
###Output
┌───┐┌─────┐ ┌───┐ ┌───┐ ░ ┌───┐ ┌───┐┌───┐»
qr_0: |0>┤ H ├┤ Sdg ├─┤ H ├───■──┤ Z ├──────░─┤ Z ├────────────■──┤ H ├┤ S ├»
└─░─┘└┬───┬┘┌┴───┴┐ │ ├───┤┌───┐ ░ └─░─┘┌───┐┌───┐ │ └─░─┘├───┤»
qr_1: |0>──░───┤ H ├─┤ Sdg ├──┼──┤ H ├┤ X ├─────░──┤ H ├┤ Z ├──┼────░──┤ Z ├»
░ └───┘ └─────┘┌─┴─┐├───┤└───┘ ░ ┌───┐└───┘└───┘┌─┴─┐ ┌─┐ └───┘»
qr_2: |0>───────────────────┤ X ├┤ X ├──────░─┤ X ├──────────┤ X ├─┤M├──────»
└───┘└───┘ ░ └───┘ └───┘ └╥┘ »
cr_0: 0 ═══════════════════════════════════════════════════════════╬═══════»
║ »
cr_1: 0 ═══════════════════════════════════════════════════════════╩═══════»
»
cr_2: 0 ═══════════════════════════════════════════════════════════════════»
»
« ┌───┐ ┌─┐
«qr_0: ─┤ H ├──────┤M├───
« ┌┴───┴┐┌───┐└╥┘┌─┐
«qr_1: ┤ Sdg ├┤ H ├─╫─┤M├
« └─────┘└───┘ ║ └╥┘
«qr_2: ─────────────╫──╫─
« ║ ║
«cr_0: ═════════════╩══╬═
« ║
«cr_1: ════════════════╬═
« ║
«cr_2: ════════════════╩═
«
###Markdown
Look at the Unitary for 1 Circuit The Unitary representing each RB circuit should be the identity (with a global phase), since we multiply random Clifford elements, including a computed reversal gate. We simulate this using an Aer unitary simulator.
###Code
#Create a new circuit without the measurement
qc = qiskit.QuantumCircuit(*rb_circs[0][-1].qregs,*rb_circs[0][-1].cregs)
for i in rb_circs[0][-1][0:-nQ]:
qc.data.append(i)
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1', 'u2', 'u3', 'cx'] # use U,CX for now
job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates)
print(np.around(job.result().get_unitary(), 3))
###Output
[[ 1.-0.j 0.-0.j 0.+0.j 0.+0.j 0.-0.j -0.-0.j 0.+0.j 0.+0.j]
[ 0.-0.j 1.-0.j 0.+0.j 0.+0.j 0.+0.j -0.+0.j 0.+0.j 0.+0.j]
[-0.+0.j -0.+0.j 1.-0.j -0.-0.j 0.+0.j -0.+0.j 0.-0.j 0.+0.j]
[ 0.-0.j -0.+0.j 0.-0.j 1.-0.j -0.+0.j -0.+0.j 0.-0.j 0.+0.j]
[-0.-0.j -0.+0.j -0.-0.j -0.-0.j 1.-0.j -0.+0.j 0.+0.j 0.+0.j]
[-0.+0.j -0.+0.j -0.-0.j -0.+0.j 0.+0.j 1.-0.j 0.+0.j 0.+0.j]
[ 0.+0.j 0.-0.j -0.-0.j -0.+0.j -0.+0.j -0.+0.j 1.-0.j -0.+0.j]
[ 0.+0.j -0.-0.j -0.+0.j 0.+0.j 0.-0.j -0.+0.j 0.+0.j 1.-0.j]]
###Markdown
Define the noise model We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
###Code
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
###Output
_____no_output_____
###Markdown
Execute on Aer simulatorWe can execute the RB sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider, and obtain a list of results, `result_list`.
###Code
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
result_list = []
qobj_list = []
import time
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
###Output
Compiling seed 0
Simulating seed 0
Compiling seed 1
Simulating seed 1
Compiling seed 2
Simulating seed 2
Compiling seed 3
Simulating seed 3
Compiling seed 4
Simulating seed 4
Finished Simulating
###Markdown
Get statistics about the survival probabilitiesThe results in **result_list** should fit to an exponentially decaying function $A \cdot \alpha ^ m + B$, where $m$ is the Clifford length.From $\alpha$ we can calculate the **Error per Clifford (EPC)**:$$ EPC = \frac{2^n-1}{2^n} (1-\alpha)$$(where $n=nQ$ is the number of qubits).
###Code
#Create an RBFitter object with 1 seed of data
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
###Output
_____no_output_____
###Markdown
Plot After 1 Seed
###Code
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plot with the Rest of the Seeds The plot is being updated after each seed.
###Code
rbfit = rb.fitters.RBFitter(result_list[0], xdata, rb_opts['rb_pattern'])
for seed_num, data in enumerate(result_list):#range(1,len(result_list)):
plt.figure(figsize=(15, 6))
axis = [plt.subplot(1, 2, 1), plt.subplot(1, 2, 2)]
# Add another seed to the data
rbfit.add_data([data])
for i in range(2):
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=axis[i], add_label=True, show_plt=False)
# Add title and label
axis[i].set_title('%d Qubit RB - after seed %d'%(len(rb_opts['rb_pattern'][i]), seed_num), fontsize=18)
# Display
display.display(plt.gcf())
# Clear display after each seed and close
display.clear_output(wait=True)
time.sleep(1.0)
plt.close()
###Output
_____no_output_____
###Markdown
Add more shots to the data
###Code
shots = 200
result_list = []
qobj_list = []
for rb_seed,rb_circ_seed in enumerate(rb_circs):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
result_list.append(job.result())
qobj_list.append(qobj)
print("Finished Simulating")
#Add this data to the previous fit
rbfit.add_data(result_list)
#Replot
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
pattern_ind = i
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(pattern_ind, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit RB'%(len(rb_opts['rb_pattern'][i])), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Predicted Gate Fidelity From the known depolarizing errors on the simulation we can predict the **fidelity**. First we need to count the number of **gates per Clifford**.The function **gates_per_clifford** takes a compiled qobj and outputs the number of basis gates in each circuit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list,xdata[0],basis_gates,rb_opts['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
###Output
Number of u1 gates per Clifford: 0.253043
Number of u2 gates per Clifford: 1.020326
Number of u3 gates per Clifford: 0.425109
Number of cx gates per Clifford: 1.471304
###Markdown
The **two-qubit Clifford gate error** gives measured errors in the basis gates that were used to construct the Clifford. It assumes that the error in the underlying gates is depolarizing. It outputs the error per a 2-qubit Clifford.The input to this function is:- **ngates:** list of the number of gates per 2Q Clifford.- **gate_qubit:** list of the qubit corresponding to the gate (0, 1 or -1). -1 corresponds to the 2Q gate.- **gate_err:** list of the gate errors.
###Code
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
gate_errs[[1,4]] = p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[[2,5]] = 2*p1Q/2 #convert from depolarizing error to epg (1Q)
gate_errs[6] = p2Q*3/4 #convert from depolarizing error to epg (2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
###Output
Predicted 2Q Error per Clifford: 1.542410e-02
###Markdown
Run an RB Sequence with T1,T2 ErrorsWe now choose RB sequences that contain only 2-qubit Cliffords.We execute these sequences as before, but with a noise model extended with T1/T2 thermal relaxation error, and fit the exponentially decaying curve.
###Code
rb_opts2 = rb_opts.copy()
rb_opts2['rb_pattern'] = [[0,1]]
rb_opts2['length_multiplier'] = 1
rb_circs2, xdata2 = rb.randomized_benchmarking_seq(**rb_opts2)
noise_model2 = NoiseModel()
#Add T1/T2 noise to the simulation
t1 = 100.
t2 = 80.
gate1Q = 0.1
gate2Q = 0.5
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,gate1Q), 'u2')
noise_model2.add_all_qubit_quantum_error(thermal_relaxation_error(t1,t2,2*gate1Q), 'u3')
noise_model2.add_all_qubit_quantum_error(
thermal_relaxation_error(t1,t2,gate2Q).tensor(thermal_relaxation_error(t1,t2,gate2Q)), 'cx')
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 500
result_list2 = []
qobj_list2 = []
for rb_seed,rb_circ_seed in enumerate(rb_circs2):
print('Compiling seed %d'%rb_seed)
new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
print('Simulating seed %d'%rb_seed)
job = backend.run(qobj, noise_model=noise_model2, backend_options={'max_parallel_experiments': 0})
result_list2.append(job.result())
qobj_list2.append(qobj)
print("Finished Simulating")
#Create an RBFitter object
rbfit = rb.RBFitter(result_list2, xdata2, rb_opts2['rb_pattern'])
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the essence by calling plot_rb_data
rbfit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('2 Qubit RB with T1/T2 noise', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
We count again the number of **gates per Clifford** as before, and calculate the **two-qubit Clifford gate error**, using the predicted primitive gate errors from the coherence limit.
###Code
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list2,xdata2[0],basis_gates,rb_opts2['rb_pattern'][0])
for i in range(len(basis_gates)):
print("Number of %s gates per Clifford: %f"%(basis_gates[i],
np.mean([gates_per_cliff[0][i],gates_per_cliff[1][i]])))
#Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0,0,0,1,1,1,-1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
#Here are the predicted primitive gate errors from the coherence limit
gate_errs[[1,4]] = rb.rb_utils.coherence_limit(1,[t1],[t2],gate1Q)
gate_errs[[2,5]] = rb.rb_utils.coherence_limit(1,[t1],[t2],2*gate1Q)
gate_errs[6] = rb.rb_utils.coherence_limit(2,[t1,t1],[t2,t2],gate2Q)
#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates,gate_qubits,gate_errs)
print("Predicted 2Q Error per Clifford: %e"%pred_epc)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |
docs/notebooks/ConfigurationFuzzer.ipynb | ###Markdown
Testing ConfigurationsThe behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences. **Prerequisites*** You should have read the [chapter on grammars](Grammars.ipynb).* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb). SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.ConfigurationFuzzer import <identifiers>```and then make use of the following features.This chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:```python>>> autopep8_runner = OptionRunner("autopep8", "foo.py")```The grammar can be extracted via the method `ebnf_grammar()`:```python>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()>>> print(option_ebnf_grammar){'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}```The grammar can be immediately used for fuzzing.
A `GrammarCoverageFuzzer` will ensure all options are covered:```python>>> from Grammars import convert_ebnf_grammar>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))>>> [fuzzer.fuzz() for i in range(3)][' -v foo.py', ' --verbose --global-config e\\ --in-place --indent-size -9 --pep8-passes 48 -i --ignore-local-config -h --exit-code --select ] -r -a --help --exclude t --jobs -5 --aggressive --hang-closing --experimental --diff --range -26 -0 --max-line-length 7 --list-fixes --recursive -d --version -p -31 --line-range 6 2 --help -r -v --exit-code foo.py', ' --ignore P -j -9 --ignore }go --select * --global-config ;0 --select \' --exclude !s --exclude L/HW:n" --global-config T --ignore V --select jur --exclude &+w --select 3 --ignore %RhF[` --exclude yMB --global-config 1 --ignore X --exclude _ --global-config xQ) --exclude =>d --ignore ( --ignore ~Y --exclude K --ignore .b --global-config A? --ignore CU --ignore , --global-config f --global-config Ez --exclude p$8c@ --ignore O --select <6 --global-config 5DS --global-config Iq2 --select 4 --exclude J^ --global-config Nv --select 79 --select i- --ignore |Zkml{Z --select aG --version --exclude d --exclude 8g foo.py']```The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.```python>>> autopep8_runner = OptionRunner("autopep8", "foo.py")>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)>>> [autopep8_fuzzer.fuzz() for i in range(3)][' foo.py', ' -v --pep8-passes 4 --global-config b --hang-closing --experimental --recursive -a --verbose -j -10729 --ignore-local-config -r --select ,! --exit-code --max-line-length -5 --ignore Di --indent-size -86 --jobs -3 --exclude { --help --diff -d --version -p -89 --list-fixes --line-range 1 -0 --range 6 5 --aggressive -i foo.py', " --in-place -h --ignore vU --select O; --ignore mq' --ignore ~Q --global-config =F --ignore nfA?0% --exclude / --global-config g --select LB --global-config s --ignore 3\\ --select (y --global-config - --global-config : --exclude ke --select ^ --ignore `6 --ignore p --ignore T --select 4j --exclude I$ --ignore 1Z --exclude M --exclude rK --ignore wN95t --select a --global-config > --recursive --aggressive -a foo.py"]```The final step in testing would now be to invoke the program with these arguments.Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. Configuration OptionsWhen we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.
As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
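###Markdown
Even before any automation, a handful of these options could be captured by hand in a grammar. Here is a minimal sketch (our illustration, not part of the chapter; the flags are real `grep` flags, while the fixed pattern and file name are placeholders), written in the EBNF grammar style used throughout this chapter:
```python
GREP_EBNF_GRAMMAR = {
    "<start>": ["(<option> )*<pattern> <file>"],
    "<option>": ["-i", "-v", "-c", "-n", "-e <pattern>"],
    "<pattern>": ["foo"],        # fixed pattern, for illustration only
    "<file>": ["input.txt"]      # hypothetical file name
}
```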
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. Options in PythonLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for ConfigurationsHow can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol
PROCESS_NUMBERS_EBNF_GRAMMAR = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_ Mining Configuration OptionsIn this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar. Tracking ArgumentsLet us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e6c5fd0>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e6c5fd0>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e6c5fd0>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_arguments()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters. A Grammar Miner for Options and Arguments Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner(object):
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form```<start> ::= <option>* <arguments> <option> ::= <empty> <arguments> ::= <empty>```in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The `process_argument()` method now analyzes the arguments passed and adds them to the grammar:* If the argument starts with `-`, it gets added as an optional element to the `<option>` list* Otherwise, it gets added to the `<arguments>` list.The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
if target == self.OPTION_SYMBOL:
self.grammar[target].append(arg)
else:
self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
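###Markdown
With the type rules in place, we can watch `add_parameter()` work in isolation — a small illustrative call we add here (the dummy function and the kwargs are made up; only `add_parameter()` is exercised):
```python
m = OptionGrammarMiner(lambda: None)  # dummy function; we never run it
m.grammar = {}                        # normally set up by mine_ebnf_grammar()
print(m.add_parameter({'type': int}, 'N'))  # -> ' <N>'
print(m.grammar)
# {'<int>': ['(-)?<digit>+'], '<digit>': ['0', ..., '9'], '<N>': ['<int>']}
```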
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in```<start> ::= <group> <option>* <arguments> <group> ::= <empty>```and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 SetupWe want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8 = ProgramRunner(args)
result, outcome = autopep8.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying them in another context.
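For example, one could mine the options from `autopep8`, but apply them on a different tool. The following is a hypothetical sketch of ours; `some_other_tool` is a placeholder, not a real executable:

```python
# Hypothetical sketch: options mined from autopep8, executed on another tool.
# `some_other_tool` stands in for any argparse-based Python program.
autopep8_runner = OptionRunner("autopep8", "foo.py")
fuzzer = OptionFuzzer(autopep8_runner)
other_runner = OptionRunner("some_other_tool", "foo.py")
result, outcome = fuzzer.run(other_runner)  # autopep8 options, other program
```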
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
### Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
### Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--wip-pep-612 --no-explicit-package-bases --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --command --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --package --txt-report ( -c --dump-deps --no-implicit-optional --pretty --no-fast-exit --dump-graph --xml-report La --warn-no-return --disallow-any-decorated --skip-cache-mtime-checks --allow-untyped-calls --no-check-untyped-defs --color-output --ignore-missing-imports --cache-dir --no-incremental --no-silence-site-packages --scripts-are-modules --allow-subclassing-any --quickstart-file Q4 --allow-redefinition --disallow-any-explicit --always-false foo.py
-p --xslt-html-report -U --strict --pdb --disallow-any-unimported foo.py
###Markdown
### Example: Notedown

Here are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--render TI
--to q --knit 'U --debug --from | --run
--version --nomagic -h x
--rmagic --strip --examples --help 8F
--precode 2 --execute --help @
--examples --nomagic !_
--nomagic --debug --debug
--timeout 3 -h m
--debug --rmagic gp
--debug --execute --rmagic P
###Markdown
## Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we can also see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only cover every option individually, but also _combinations_ of options.

The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
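As a quick reminder of how `combinations()` works, here is a toy example of ours on three options:

```python
from itertools import combinations

# All unordered pairs from a three-element list:
list(combinations(['-a', '-b', '-c'], 2))
# [('-a', '-b'), ('-a', '-c'), ('-b', '-c')]
```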
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here are the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--timeout 56 --debug
--help --render --help --run
-h --debug --help --debug
--run --examples --rmagic --nomagic |
--rmagic --version --nomagic --render
--strip --nomagic -h --examples
--examples --version -h --help
--run --debug --execute --rmagic %
--output : --run zj
--nomagic --debug --nomagic --examples
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient

$${n \choose k} = \frac{n!}{k!(n - k)!}$$

which for $k = 2$ (all pairs) gives us

$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$

For `autopep8` with its 30 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 435 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore, we need 870 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in a little over three hours of testing. If your program has more options that you want to have covered in combinations, it is advisable to limit the number of configurations further – for instance, by limiting combinatorial testing to those combinations that can possibly interact with each other, and covering all other (presumably orthogonal) options individually.

This mechanism of creating configurations by extending grammars can easily be extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you.

## Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
print(option_ebnf_grammar)
###Output
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

## Lessons Learned

* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.

## Next Steps

If you liked the idea of mining a grammar from a program, do not miss:

* [how to mine grammars for input data](GrammarMiner.ipynb)

Our next steps in the book focus on:

* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)

## Background

Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under the control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.

More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.

## Exercises

### Exercise 1: #ifdef Configuration Fuzzing

In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`).

Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.

#### Part 1: Extract Preprocessor Variables

Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
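As a quick check of ours, the directive pattern below matches `#if`, `#ifdef`, and `#elif` lines, but not `#define` lines:

```python
import re

re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")  # same pattern as below

assert re_cpp_if_directive.match("#if defined(_WIN32)")
assert re_cpp_if_directive.match("#ifdef XML_UNICODE_WCHAR_T")
assert re_cpp_if_directive.match("# elif FOO")
assert not re_cpp_if_directive.match("#define USE_SYSV_ENVVARS")
```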
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
#### Part 2: Derive an Option Grammar

With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<identifier>` for a preprocessor variable `<identifier>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c',
 ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import new_symbol
cpp_grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
cpp_grammar
assert is_valid_grammar(cpp_grammar)
###Output
_____no_output_____
###Markdown
#### Part 3: C Preprocessor Configuration Fuzzing

Using the grammar just produced, use a `GrammarCoverageFuzzer` to

1. Test each preprocessor variable individually
2. Test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations?

**Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__SCO__ -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DXML_DEV_URANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DTIOCSWINSZ -DHAVE_GETRANDOM -DRANDOM_BUF -D__UNIXWARE__ xmlparse.c
$ cc -c -DHAVE_GETRANDOM -DXML_UNICODE_WCHAR_T -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
$ cc -c -D__SCO__ -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -D__UNIXWARE__ -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN -DHAVE_SYSCALL_GETRANDOM xmlparse.c
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)  # fuzz with the pairwise grammar
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DHAVE_SYSCALL_GETRANDOM -DRANDOM xmlparse.c
$ cc -c -D__UNIXWARE__ -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D__SCO__ -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DRANDOM_BUF -D_WIN -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DRANDOM xmlparse.c
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors that we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
### Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files.

The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
#### Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

#### Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.

#### Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy!

### Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

#### Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results.

**Solution.** Left to the reader. Enjoy hacking!

#### Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure.

**Solution.** Left to the reader. Enjoy hacking!

### Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
# Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('XTGFX-tcotE')
###Output
_____no_output_____
###Markdown
**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).
###Code
import bookutils
from typing import List, Union, Optional, Callable, Type
###Output
_____no_output_____
###Markdown
## Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features.

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> option_ebnf_grammar
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py',
 ' --max-line-length 6 --jobs -594 --ignore , --ignore-local-config -r --in-place --list-fixes --recursive -v --experimental -p 72 -h --aggressive --indent-size 3 --exit-code --hang-closing --pep8-passes -180 -d --global-config XQjT --diff --exclude *g -j 43 --help --select A --version --verbose -a --line-range -3963 0 --range 1 4 -i --in-place --version foo.py',
 ' --global-config 2 --select PuR --ignore b --ignore @ --ignore ;7d --ignore ) --ignore Fw1Z --ignore 0 --global-config ynf --select >G --select + --global-config ( --exclude v --exclude V --ignore ^ --select L --exclude 6 --exclude =$` --ignore % --global-config N --ignore [8maop --ignore 3! --select ~?c< --exclude C --select U --exclude h --global-config  --global-config 5O --select x --select B] --ignore _ --global-config .K --global-config S --exclude r --global-config qW --exclude te4/ --exclude J} --ignore " --exclude |H --global-config -&k{s --global-config E --select :I --ignore 9 --global-config M --exclude YD --select \\ --exclude z --ignore i --select \'l --ignore M --ignore ;h --exit-code foo.py']
```

The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' --diff foo.py',
 ' --exclude --global-config V --select He --global-config | --global-config n}aicm --ignore 7 --ignore b --global-config u --exclude WB` --exclude 2 --exclude JpZt --exclude l_ --select *%^ --exclude & --exclude )Lv --global-config [ --global-config " --exclude sOEXP --aggressive --exclude \' --help --diff --experimental foo.py',
 ' --ignore FCw; --global-config /1K?:6 --exclude U --exclude z --ignore rQ --select x --select Y --select { --global-config o --select 34 --exclude ]j --select ~ --exclude 9@ --ignore w --global-config CVL --diff foo.py']
```

The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

The `OptionRunner` constructor accepts an additional `miner` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create one's own option grammar miners.

![](PICS/ConfigurationFuzzer-synopsis-1.svg)

## Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.

One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.

As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.

## Options in Python

Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`).

By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`action`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(SystemExit, print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
## A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested, as well as grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

## Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.

Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

### Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e3c98b0>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e3c98b0>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e3c98b0>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

### A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
"""Helper class for extracting option grammars"""
def __init__(self, function: Callable, log: bool = False):
"""Constructor.
`function` - a function processing arguments using argparse()
`log` - output diagnostics if True
"""
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start> ::= (<option>)*<arguments>
<option> ::= <empty>
<arguments> ::= <empty>
```

in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
"""Extract EBNF option grammar"""
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
"""Extract BNF option grammar"""
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return self.traceit
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` parameter specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.

Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
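As a worked example of what this produces: for `process_numbers()` above, the positional `integers` argument (declared with `type=int` and `nargs='+'`) yields the following rules. The repetition comes from `nargs`; the `<integers>` rule stems from the parameter handling discussed next:

```
<arguments> ::= ( <integers>)+
<integers> ::= <int>
```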
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
        if arg.startswith('-'):
            # Option: goes into <option> (or into the current group)
            if not in_group:
                target = self.OPTION_SYMBOL
            else:
                target = self.current_group
            metavar = None
            arg = " " + arg
        else:
            # Positional argument: goes into <arguments>
            target = self.ARGUMENTS_SYMBOL
            metavar = arg
            arg = ""

        # How many parameters follow? Default is one.
        if "nargs" in kwargs:
            nargs = kwargs["nargs"]
        else:
            nargs = 1

        param = self.add_parameter(kwargs, metavar)
        if param == "":
            nargs = 0

        if isinstance(nargs, int):
            # A fixed number of parameters: repeat them
            for i in range(nargs):
                arg += param
        else:
            # An abstract specifier ('?', '+', '*'): use it as EBNF operator
            assert nargs in "?+*"
            arg += '(' + param + ')' + nargs

        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group>(<option>)*<arguments>
<group> ::= <empty>
```

and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 SetupWe want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(ProgramRunner):
"""Run a program while determining its option grammar"""
def __init__(self, program: Union[str, List[str]],
arguments: Optional[str] = None, *,
log: bool = False,
miner_class: Optional[Type[OptionGrammarMiner]] = None):
"""Constructor.
`program` - the (Python) program to be executed
`arguments` - an (optional) string with arguments for `program`
`log` - if True, enable logging in miner
`miner_class` - the `OptionGrammarMiner` class to be used
(default: `OptionGrammarMiner`)
"""
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
if miner_class is None:
miner_class = OptionGrammarMiner
self.miner_class = miner_class
self.log = log
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
if self._executable is None:
raise IOError(self.base_executable + ": not found")
first_line = open(self._executable).readline()
if first_line.find("python") < 0:
raise IOError(self.base_executable + ": not a Python executable")
self.contents = open(self._executable).read()
def invoker(self):
# We are passing the local variables as is, such that we can access `self`
# We set __name__ to '__main__' to invoke the script as an executable
exec(self.contents, {'__name__': '__main__'})
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = self.miner_class(self.invoker, log=self.log)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
"""Return extracted grammar in EBNF form"""
return self._ebnf_grammar
def grammar(self):
"""Return extracted grammar in BNF form"""
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
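###Markdown
The `set_arguments()` method defined above lets us exchange the fixed arguments later on. As a quick illustration (using a fresh runner, so that `autopep8_runner` keeps its `foo.py` argument; `bar.py` is just a placeholder name):
###Code
demo_runner = OptionRunner("autopep8", "foo.py")
demo_runner.set_arguments("bar.py")  # replace the fixed arguments
demo_runner.ebnf_grammar()["<arguments>"]
###Output
_____no_output_____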
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
"""Fuzz a (Python) program using its arguments"""
def __init__(self, runner: OptionRunner, *args, **kwargs):
"""Constructor. `runner` is an OptionRunner."""
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying them in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPyWe can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --no-explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--explicit-package-bases --exclude --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --inferstats --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --command --module --disallow-redefinition --cache-dir --no-strict-optional -c -h -p foo.py
--implicit-optional --disallow-any-generics --no-warn-unreachable --strict-equality --show-error-codes --package --always-false foo.py
###Markdown
Example: NotedownHere are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
c
--help -o --nomagic --version --output --match H9- --rmagic --render --run
--examples -h --precode r x --execute --strip --debug <
--execute --rmagic --rmagic !
-h --render --debug 2
--examples --version --version .
--execute --help --render R
--run -h 6T
--execute --rmagic -h ;
--from 5* --version --help s
###Markdown
Combinatorial TestingOur `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we can also see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here are the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
--help --timeout -0 w
--rmagic --render --execute --strip
--execute --rmagic --nomagic --debug
--help --strip --strip --version r
-h --debug --help --execute
--execute --debug -h --render
--strip --rmagic --render --examples h
-h --run --examples --debug
--from PX --examples >
--run --strip --strip --render
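###Markdown
The same construction generalizes beyond pairs. Here is a sketch of a hypothetical `k_wise()` helper (an illustrative addition; with `k = 2`, it reduces to `pairwise()` above):
###Code
def k_wise(option_list, k):
    """Concatenate all k-tuples of options, mirroring pairwise() above."""
    return ["".join(combo) for combo in combinations(option_list, k)]

k_wise(option_list, 3)[:3]
###Output
_____no_output_____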
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
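###Markdown
We can check this number directly, using Python's built-in `math.comb`:
###Code
from math import comb
comb(29, 2)  # number of unordered pairs of 29 options
###Output
_____no_output_____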
###Markdown
... we thus have 406 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 812 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in three hours of testing, though. If your program has more options that you want to get covered in combinations, it is advisable to limit the number of configurations further – for instance, by limiting combinatorial testing to those combinations that possibly can interact with each other, and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you. SynopsisThis chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options. `OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
option_ebnf_grammar
###Output
_____no_output_____
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. The `OptionRunner` constructor accepts an additional `miner_class` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create your own option grammar miners.
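For instance, a hypothetical `LoggingOptionGrammarMiner` (a small sketch relying only on the `add_group()` hook shown earlier in this chapter) could log each option group as it is mined:
###Code
class LoggingOptionGrammarMiner(OptionGrammarMiner):
    # Report each mutually exclusive group encountered during mining
    def add_group(self, locals, exclusive):
        print(f"Mining group (exclusive={exclusive})")
        super().add_group(locals, exclusive)

# Such a subclass could also be passed to OptionRunner via miner_class=...
miner = LoggingOptionGrammarMiner(process_numbers)
logging_grammar = miner.mine_ebnf_grammar()
###Output
_____no_output_____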
###Code
# ignore
from ClassDiagram import display_class_hierarchy
from Fuzzer import Fuzzer, Runner, ProgramRunner
from Grammars import Expansion
from GrammarFuzzer import GrammarFuzzer, DerivationTree
from GrammarCoverageFuzzer import TrackingGrammarCoverageFuzzer
# ignore
display_class_hierarchy([OptionRunner, OptionFuzzer, OptionGrammarMiner],
public_methods=[
Fuzzer.__init__,
Fuzzer.fuzz,
Fuzzer.run,
Fuzzer.runs,
GrammarFuzzer.__init__,
GrammarFuzzer.fuzz,
GrammarFuzzer.fuzz_tree,
TrackingGrammarCoverageFuzzer.__init__,
OptionFuzzer.__init__,
OptionFuzzer.run,
Runner.__init__,
Runner.run,
ProgramRunner.__init__,
ProgramRunner.__init__,
OptionRunner.__init__,
OptionRunner.ebnf_grammar,
OptionRunner.grammar,
OptionGrammarMiner.__init__,
OptionGrammarMiner.mine_ebnf_grammar,
OptionGrammarMiner.mine_grammar,
],
types={
'DerivationTree': DerivationTree,
'Expansion': Expansion,
'Grammar': Grammar
},
project='fuzzingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned* Besides regular input data, program _configurations_ make an important testing target.* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. Next StepsIf you liked the idea of mining a grammar from a program, do not miss:* [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on:* [how to parse and recombine inputs](Parser.ipynb)* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)* [how to simplify inputs that cause a failure](Reducer.ipynb) BackgroundAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. Exercises Exercise 1: #ifdef Configuration FuzzingIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<var>` or `-D<var>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32  0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments.
Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. Part 1: Extract Preprocessor VariablesWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option GrammarWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c', ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration FuzzingUsing the grammar just produced, use a `GrammarCoverageFuzzer` to

1. Test each preprocessor variable individually
2. Test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DRANDOM -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D__UNIXWARE__ -DHAVE_ARC -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -D_WIN -DLOAD_LIBRARY_SEARCH_SYSTEM -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DHAVE_GETRANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DHAVE_ARC -DHAVE_SYSCALL_GETRANDOM -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_GETRANDOM -DHAVE_ARC xmlparse.c
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM -DHAVE_SYSCALL_GETRANDOM -D_WIN xmlparse.c
$ cc -c -DRANDOM_BUF -D__SCO__ -DHAVE_SYSCALL_GETRANDOM -DRANDOM_BUF xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -DXML_DEV_URANDOM -D_WIN -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -DHAVE_GETRANDOM -DXML_DEV_URANDOM xmlparse.c
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration FuzzingBesides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read ConfigurationUsing `configparser`, create a program reading in the above configuration file and accessing the individual elements. Part 2: Create a Configuration GrammarDesign a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. Part 3: Mine a Configuration GrammarBy dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy! Exercise 3: Extracting and Fuzzing C Command-Line OptionsIn C programs, the `getopt()` function is frequently used to process configuration options. A call

```c
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon). Part 1: Getopt FuzzingWrite a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking! Part 2: Fuzzing Long Options in CSame as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking! Exercise 4: Expansions in ContextIn our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line>   ::= <int>
<int>    ::= (-)?<digit>+
<digit>  ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing ConfigurationsThe behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('XTGFX-tcotE')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on grammars](Grammars.ipynb).* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).
###Code
import bookutils
from typing import List, Union, Optional, Callable, Type
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features.This chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> option_ebnf_grammar
{'<start>': ['(<option>)*<arguments>'],
 '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'],
 '<arguments>': [' foo.py'],
 '<str>': ['<char>+'],
 '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'],
 '<filename>': ['<str>'],
 '<int>': ['(-)?<digit>+'],
 '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'],
 '<n>': ['<int>'],
 '<globs>': ['<str>'],
 '<errors>': ['<str>'],
 '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py',
 ' --max-line-length 6 --jobs -594 --ignore , --ignore-local-config -r --in-place --list-fixes --recursive -v --experimental -p 72 -h --aggressive --indent-size 3 --exit-code --hang-closing --pep8-passes -180 -d --global-config XQjT --diff --exclude *g -j 43 --help --select A --version --verbose -a --line-range -3963 0 --range 1 4 -i --in-place --version foo.py',
 ' --global-config 2 --select PuR --ignore b --ignore @ --ignore ;7d --ignore ) --ignore Fw1Z --ignore 0 --global-config ynf --select >G --select + --global-config ( --exclude v --exclude V --ignore ^ --select L --exclude 6 --exclude =$` --ignore % --global-config N --ignore [8maop --ignore 3! --select ~?c< --exclude C --select U --exclude h --global-config --global-config 5O --select x --select B] --ignore _ --global-config .K --global-config S --exclude r --global-config qW --exclude te4/ --exclude J} --ignore " --exclude |H --global-config -&k{s --global-config E --select :I --ignore 9 --global-config M --exclude YD --select \\ --exclude z --ignore i --select \'l --ignore M --ignore ;h --exit-code foo.py']
```

The `OptionFuzzer` class summarizes these steps. 
Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' --diff foo.py',
 ' --exclude --global-config V --select He --global-config | --global-config n}aicm --ignore 7 --ignore b --global-config u --exclude WB` --exclude 2 --exclude JpZt --exclude l_ --select *%^ --exclude & --exclude )Lv --global-config [ --global-config " --exclude sOEXP --aggressive --exclude \' --help --diff --experimental foo.py',
 ' --ignore FCw; --global-config /1K?:6 --exclude U --exclude z --ignore rQ --select x --select Y --select { --global-config o --select 34 --exclude ]j --select ~ --exclude 9@ --ignore w --global-config CVL --diff foo.py']
```

The final step in testing would now be to invoke the program with these arguments.Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.The `OptionRunner` constructor accepts an additional `miner_class` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create your own option grammar miners.![](PICS/ConfigurationFuzzer-synopsis-1.svg) Configuration OptionsWhen we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, is _configuration options_ on the command line. As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. Options in PythonLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow to store specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(SystemExit, print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for ConfigurationsHow can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_ Mining Configuration OptionsIn this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar. Tracking ArgumentsLet us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x107b123d0>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x107b123d0>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x107b123d0>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters. A Grammar Miner for Options and Arguments Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
"""Helper class for extracting option grammars"""
def __init__(self, function: Callable, log: bool = False):
"""Constructor.
`function` - a function processing arguments using argparse()
`log` - output diagnostics if True
"""
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form
```
<start> ::= <option>* <arguments>
<option> ::= <empty>
<arguments> ::= <empty>
```
in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
"""Extract EBNF option grammar"""
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
"""Extract BNF option grammar"""
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return self.traceit
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:* If the argument starts with `-`, it gets added as an optional element to the `<option>` list* Otherwise, it gets added to the `<arguments>` list.The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
if target == self.OPTION_SYMBOL:
self.grammar[target].append(arg)
else:
self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
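###Markdown
To see what these methods contribute, we can invoke `process_arg()` by hand. This is a minimal sketch for illustration only – the option `--timeout` is made up, and during actual mining, these calls happen automatically while tracing:
###Code
# Sketch: feed process_arg() with a hypothetical option taking an int parameter
miner = OptionGrammarMiner(lambda: None)  # dummy function; we do not trace here
miner.grammar = {
    START_SYMBOL: ["(" + miner.OPTION_SYMBOL + ")*" + miner.ARGUMENTS_SYMBOL],
    miner.OPTION_SYMBOL: [],
    miner.ARGUMENTS_SYMBOL: []
}
miner.current_group = miner.OPTION_SYMBOL
miner.process_arg("--timeout", False, {"type": int})  # hypothetical option
assert miner.grammar[miner.OPTION_SYMBOL] == [' --timeout <int>']
# Rules for <int> and <digit> have been added along the way
###Output
_____no_output_____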
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in
```
<start> ::= <group> <option>* <arguments>
<group> ::= <empty>
```
and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
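###Markdown
We can check the group operators on a tiny example. The following sketch uses a made-up `argparse` program with one required, mutually exclusive group; it is for illustration only:
###Code
# Sketch: mine the grammar of a made-up program with an exclusive group
import argparse

def group_demo():
    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--fast', action='store_true')
    group.add_argument('--slow', action='store_true')
    parser.parse_args()  # mining stops here

group_miner = OptionGrammarMiner(group_demo)
group_grammar = group_miner.mine_ebnf_grammar()
# group_grammar["<start>"] should now begin with "<group>" (no EBNF suffix,
# as the group is both required and exclusive); group_grammar["<group>"]
# should hold ' --fast' and ' --slow'
###Output
_____no_output_____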
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 SetupWe want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(ProgramRunner):
"""Run a program while determining its option grammar"""
def __init__(self, program: Union[str, List[str]],
arguments: Optional[str] = None, *,
log: bool = False,
miner_class: Optional[Type[OptionGrammarMiner]] = None):
"""Constructor.
`program` - the (Python) program to be executed
`arguments` - an (optional) string with arguments for `program`
`log` - if True, enable logging in miner
`miner_class` - the `OptionGrammarMiner` class to be used
(default: `OptionGrammarMiner`)
"""
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
if miner_class is None:
miner_class = OptionGrammarMiner
self.miner_class = miner_class
self.log = log
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
if self._executable is None:
raise IOError(self.base_executable + ": not found")
first_line = open(self._executable).readline()
if first_line.find("python") < 0:
raise IOError(self.base_executable + ": not a Python executable")
self.contents = open(self._executable).read()
def invoker(self):
# We are passing the local variables as is, such that we can access `self`
# We set __name__ to '__main__' to invoke the script as an executable
exec(self.contents, {'__name__': '__main__'})
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = self.miner_class(self.invoker, log=self.log)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
"""Return extracted grammar in EBNF form"""
return self._ebnf_grammar
def grammar(self):
"""Return extracted grammar in BNF form"""
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
"""Fuzz a (Python) program using its arguments"""
def __init__(self, runner: OptionRunner, *args, **kwargs):
"""Constructor. `runner` is an OptionRunner."""
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
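###Markdown
The `set_arguments()` method from above also lets us re-target the mined grammar at other inputs without mining again – a minimal sketch; the file name `bar.py` is made up:
###Code
# Sketch: point the mined grammar at a different (hypothetical) input file
autopep8_runner.set_arguments("bar.py")
# autopep8_runner.ebnf_grammar()["<arguments>"] is now [' bar.py']
autopep8_runner.set_arguments("foo.py")  # restore for the rest of this chapter
###Output
_____no_output_____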
###Markdown
Example: MyPyWe can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --no-explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--explicit-package-bases --exclude --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --inferstats --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --command --module --disallow-redefinition --cache-dir --no-strict-optional -c -h -p foo.py
--implicit-optional --disallow-any-generics --no-warn-unreachable --strict-equality --show-error-codes --package --always-false foo.py
###Markdown
Example: NotedownHere are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
c
--help -o --nomagic --version --output --match H9- --rmagic --render --run
--examples -h --precode r x --execute --strip --debug <
--execute --rmagic --rmagic !
-h --render --debug 2
--examples --version --version .
--execute --help --render R
--run -h 6T
--execute --rmagic -h ;
--from 5* --version --help s
###Markdown
Combinatorial TestingOur `CoverageGrammarFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
--help --timeout -0 w
--rmagic --render --execute --strip
--execute --rmagic --nomagic --debug
--help --strip --strip --version r
-h --debug --help --execute
--execute --debug -h --render
--strip --rmagic --render --examples h
-h --run --examples --debug
--from PX --examples >
--run --strip --strip --render
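###Markdown
If only a few options are suspected to interact, we do not have to pair _all_ of them. Here is a sketch of a middle ground: pair a hand-picked (and here, made-up) subset of options, and keep the remaining options as singles.
###Code
# Sketch: pairwise coverage for selected options only
interacting = [' --run', ' --execute', ' --strip']  # hypothetical selection
singles = [opt for opt in notedown_grammar["<option>"] if opt not in interacting]
focused_grammar = extend_grammar(notedown_grammar)
focused_grammar["<option>"] = pairwise(interacting) + singles
assert is_valid_grammar(focused_grammar)
###Output
_____no_output_____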
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
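###Markdown
If pairs do not suffice for some options, `pairwise()` generalizes directly to longer tuples. Here is a minimal sketch of a hypothetical `t_wise()` helper (not part of the classes in this chapter):
###Code
# Sketch: generalize pairwise() to combinations of arbitrary length t
def t_wise(option_list, t=2):
    return [''.join(combo) for combo in combinations(option_list, t)]

assert t_wise(option_list, 2) == pairwise(option_list)
triples = t_wise(option_list, 3)
# len(triples) is 1140, matching the table above
###Output
_____no_output_____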
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 406 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 812 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
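###Markdown
We can double-check these numbers with Python's built-in `math.comb()`:
###Code
import math

assert math.comb(29, 2) * 2 == 812     # autopep8: ordered pairs
assert math.comb(110, 2) * 2 == 11990  # mypy: ordered pairs
###Output
_____no_output_____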
###Markdown
Even if each pair takes a second to run, we'd still be done in three hours of testing, though. If your program has more options that you want to have covered in combinations, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other (as sketched above for `notedown`); and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](Exercises), below, have a number of options ready for you. SynopsisThis chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options. `OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
option_ebnf_grammar
###Output
_____no_output_____
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. The `OptionRunner` constructor accepts an additional `miner_class` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create your own option grammar miners.
###Code
# ignore
from ClassDiagram import display_class_hierarchy
from Fuzzer import Fuzzer, Runner, ProgramRunner
from Grammars import Expansion
from GrammarFuzzer import GrammarFuzzer, DerivationTree
from GrammarCoverageFuzzer import TrackingGrammarCoverageFuzzer
# ignore
display_class_hierarchy([OptionRunner, OptionFuzzer, OptionGrammarMiner],
public_methods=[
Fuzzer.__init__,
Fuzzer.fuzz,
Fuzzer.run,
Fuzzer.runs,
GrammarFuzzer.__init__,
GrammarFuzzer.fuzz,
GrammarFuzzer.fuzz_tree,
TrackingGrammarCoverageFuzzer.__init__,
OptionFuzzer.__init__,
OptionFuzzer.run,
Runner.__init__,
Runner.run,
ProgramRunner.__init__,
ProgramRunner.__init__,
OptionRunner.__init__,
OptionRunner.ebnf_grammar,
OptionRunner.grammar,
OptionGrammarMiner.__init__,
OptionGrammarMiner.mine_ebnf_grammar,
OptionGrammarMiner.mine_grammar,
],
types={
'DerivationTree': DerivationTree,
'Expansion': Expansion,
'Grammar': Grammar
},
project='fuzzingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned* Besides regular input data, program _configurations_ make an important testing target.* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. Next StepsIf you liked the idea of mining a grammar from a program, do not miss:* [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on:* [how to parse and recombine inputs](Parser.ipynb)* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)* [how to simplify inputs that cause a failure](Reducer.ipynb) BackgroundAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. Exercises Exercise 1: ifdef Configuration FuzzingIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code```Cifdef LONG_FOOlong foo() { ... }elseint foo() { ... }endif```the compiler will compile the function `foo()` with return type`long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `define`, as in `define LONG_FOO`) or on the C compiler command line (using `-D` or `-D=`, as in `-DLONG_FOO`. Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:```cif defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32) define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800endifif !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \ && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \ && !defined(XML_DEV_URANDOM) \ && !defined(_WIN32) \ && !defined(XML_POOR_ENTROPY) errorendifif !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */endififdef XML_UNICODE_WCHAR_Tdefine XML_T(x) (const wchar_t)xdefine XML_L(x) L xelsedefine XML_T(x) (const unsigned short)xdefine XML_L(x) xendifint fun(int x) { return XML_T(x); }``` A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. 
Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. Part 1: Extract Preprocessor VariablesWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that```pythoncpp_identifiers(open("xmlparse.c").readlines()) ```returns the set```python{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}``` **Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option GrammarWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer ```pythong = GrammarCoverageFuzzer(cpp_grammar)```would create C compiler invocations such as```python[g.fuzz() for i in range(10)]['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c', 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c', 'cc -DXML_POOR_ENTROPY xmlparse.c', 'cc -DRANDOM xmlparse.c', 'cc -D_WIN xmlparse.c', 'cc -DHAVE_ARC xmlparse.c', ...]``` **Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration FuzzingUsing the grammar just produced, use a `GrammarCoverageFuzzer` to1. Test each processor variable individually2. Test each pair of processor variables, using `pairwise()`.What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -DHAVE_GETRANDOM -D_WIN xmlparse.c
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -DRANDOM -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ -DXML_DEV_URANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
$ cc -c -DHAVE_ARC -D__SCO__ -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DRANDOM -DHAVE_ARC -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM -DHAVE_ARC -DTIOCSWINSZ xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DXML_POOR_ENTROPY -DHAVE_ARC -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
$ cc -c -DXML_DEV_URANDOM -D__UNIXWARE__ xmlparse.c
$ cc -c -D_WIN -DTIOCSWINSZ -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -D__SCO__ -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration FuzzingBesides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):
```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```
The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read ConfigurationUsing `configparser`, create a program reading in the above configuration file and accessing the individual elements. Part 2: Create a Configuration GrammarDesign a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. Part 3: Mine a Configuration GrammarBy dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy! Exercise 3: Extracting and Fuzzing C Command-Line OptionsIn C programs, the `getopt()` function is frequently used to process configuration options. A call```getopt(argc, argv, "bf:")```indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon). Part 1: Getopt FuzzingWrite a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking! Part 2: Fuzzing Long Options in CSame as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking! Exercise 4: Expansions in ContextIn our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:
```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing ConfigurationsThe behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences. **Prerequisites*** You should have read the [chapter on grammars](Grammars.ipynb).* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb). SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.ConfigurationFuzzer import <identifier>```and then make use of the following features.This chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:```python>>> autopep8_runner = OptionRunner("autopep8", "foo.py")```The grammar can be extracted via the method `ebnf_grammar()`:```python>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()>>> print(option_ebnf_grammar){'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}```The grammar can be immediately used for fuzzing. 
A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py',
 ' --max-line-length 6 --jobs -594 --ignore , --ignore-local-config -r --in-place --list-fixes --recursive -v --experimental -p 72 -h --aggressive --indent-size 3 --exit-code --hang-closing --pep8-passes -180 -d --global-config XQjT --diff --exclude *g -j 43 --help --select A --version --verbose -a --line-range -3963 0 --range 1 4 -i --in-place --version foo.py',
 ' --global-config 2 --select PuR --ignore b --ignore @ --ignore ;7d --ignore ) --ignore Fw1Z --ignore 0 --global-config ynf --select >G --select + --global-config ( --exclude v --exclude V --ignore ^ --select L --exclude 6 --exclude =$` --ignore % --global-config N --ignore [8maop --ignore 3! --select ~?c< --exclude C --select U --exclude h --global-config --global-config 5O --select x --select B] --ignore _ --global-config .K --global-config S --exclude r --global-config qW --exclude te4/ --exclude J} --ignore " --exclude |H --global-config -&k{s --global-config E --select :I --ignore 9 --global-config M --exclude YD --select \\ --exclude z --ignore i --select \'l --ignore M --ignore ;h --exit-code foo.py']
```

The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' --diff foo.py',
 ' --exclude --global-config V --select He --global-config | --global-config n}aicm --ignore 7 --ignore b --global-config u --exclude WB` --exclude 2 --exclude JpZt --exclude l_ --select *%^ --exclude & --exclude )Lv --global-config [ --global-config " --exclude sOEXP --aggressive --exclude \' --help --diff --experimental foo.py',
 ' --ignore FCw; --global-config /1K?:6 --exclude U --exclude z --ignore rQ --select x --select Y --select { --global-config o --select 34 --exclude ]j --select ~ --exclude 9@ --ignore w --global-config CVL --diff foo.py']
```

The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.

One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, is _configuration options_ on the command line.

As an example, consider the `grep` utility to find textual patterns in files.
The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.

Options in Python

Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`).

By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.

Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10ae26820>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10ae26820>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10ae26820>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar.
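To make the classification step concrete first, here is a minimal sketch (our own illustration; `classify_argument()` is a hypothetical helper and not part of the miner below), assuming the `args`/`kwargs` conventions seen in the traces above:

```python
# A sketch only: classify a single add_argument() call.
def classify_argument(args, kwargs):
    for arg in args:
        kind = "option" if arg.startswith('-') else "argument"
        # Special actions such as 'store_const' take no parameter
        has_param = "action" not in kwargs
        param_type = kwargs.get("type", str).__name__ if has_param else None
        print(arg, "->", kind, "parameter type:", param_type)

classify_argument(('--sum',), {'action': 'store_const'})       # option, no parameter
classify_argument(('integers',), {'type': int, 'nargs': '+'})  # argument of type int
```

The real miner below implements exactly this kind of analysis, but systematically. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options: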
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start> ::= <option>* <arguments>
<option> ::=
<arguments> ::=
```

in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
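# Grammar skeleton: <start> ::= (<option>)*<arguments>;
# the <option> and <arguments> expansions are filled in while tracing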
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.

Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
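# Options start with '-'; anything else is a positional argument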
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group><option>* <arguments>
<group> ::=
```

and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar.

Testing Autopep8

Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 Setup

We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 Grammar

We can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options

Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration Options

Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.")

The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
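# Locate the executable on the PATH and read in its (Python) source code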
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying them in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
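As a quick aside, here is a sketch of the cross-runner usage just mentioned (the pairing is our own and purely illustrative; options mined from one program need not make sense for a different target):

```python
# A sketch only: mine a grammar from one runner, fuzz another with it.
miner = OptionRunner("autopep8", "foo.py")
fuzzer = OptionFuzzer(miner)
target = OptionRunner("autopep8", "foo.py")  # could be any other OptionRunner
# result, outcome = fuzzer.run(target)       # applies the mined options to `target`
```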
Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --no-explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--explicit-package-bases --exclude --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --inferstats --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --command --module --disallow-redefinition --cache-dir --no-strict-optional -c -h -p foo.py
--implicit-optional --disallow-any-generics --no-warn-unreachable --strict-equality --show-error-codes --package --always-false foo.py
###Markdown
Example: Notedown

Here are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
c
--help -o --nomagic --version --output --match H9- --rmagic --render --run
--examples -h --precode r x --execute --strip --debug <
--execute --rmagic --rmagic !
-h --render --debug 2
--examples --version --version .
--execute --help --render R
--run -h 6T
--execute --rmagic -h ;
--from 5* --version --help s
###Markdown
Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options.

The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
--help --timeout -0 w
--rmagic --render --execute --strip
--execute --rmagic --nomagic --debug
--help --strip --strip --version r
-h --debug --help --execute
--execute --debug -h --render
--strip --rmagic --render --examples h
-h --run --examples --debug
--from PX --examples >
--run --strip --strip --render
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient

$${n \choose k} = \frac{n!}{k!(n - k)!}$$

which for $k = 2$ (all pairs) gives us

$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$

For `autopep8` with its 30 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 435 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 870 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
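As a quick sanity check, we can let Python do the counting (a sketch of ours; `math.comb` is available from Python 3.8 onwards):

```python
from math import comb

n = 30                                         # number of autopep8 options mined above
assert comb(n, 2) == n * (n - 1) // 2 == 435   # unordered pairs
assert 2 * comb(n, 2) == 870                   # ordered pairs, as our tests produce
```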
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in three hours of testing, though. If your program has more options that you want covered in combinations, it is advisable to limit the number of configurations further – for instance, by limiting combinatorial testing to those combinations that possibly can interact with each other, and covering all other (presumably orthogonal) options individually.

This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](Exercises), below, have a number of options ready for you.

Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
print(option_ebnf_grammar)
###Output
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

Lessons Learned

* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.

Next Steps

If you liked the idea of mining a grammar from a program, do not miss:

* [how to mine grammars for input data](GrammarMiner.ipynb)

Our next steps in the book focus on:

* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)

Background

Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.

More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.

Exercises

Exercise 1: #ifdef Configuration Fuzzing

In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`).

Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.

Part 1: Extract Preprocessor Variables

Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option Grammar

With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c',
 ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
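# Add one -D<identifier> option expansion per extracted preprocessor variable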
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration Fuzzing

Using the grammar just produced, use a `GrammarCoverageFuzzer` to

1. Test each preprocessor variable individually
2. Test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations?

**Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D__SCO__ -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DHAVE_GETRANDOM -DTIOCSWINSZ xmlparse.c
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -DRANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D_WIN -D__UNIXWARE__ -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)  # fuzz with the pairwise grammar
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DHAVE_GETRANDOM -D_WIN -DHAVE_ARC xmlparse.c
$ cc -c -D__UNIXWARE__ -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ -D_WIN -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_POOR_ENTROPY -DLOAD_LIBRARY_SEARCH_SYSTEM -D_WIN -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DRANDOM -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_ARC -DRANDOM_BUF -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DXML_POOR_ENTROPY -D__UNIXWARE__ -DHAVE_ARC xmlparse.c
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the `wchar_t` type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.

Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. (A small sketch follows right after the cleanup below.) At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
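###Markdown
A minimal sketch of Parts 1–3 (entirely our own illustration; `RecordingConfigParser` and the other names below are hypothetical, not part of the `configparser` API): we read a configuration from a string, record which sections get accessed, derive a prototype grammar from the recorded accesses, and fuzz with it.
###Code
import configparser
import string
from Grammars import srange, convert_ebnf_grammar, is_valid_grammar
from GrammarCoverageFuzzer import GrammarCoverageFuzzer

class RecordingConfigParser(configparser.ConfigParser):
    # Record every section accessed via p[key]
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.accessed_sections = set()

    def __getitem__(self, key):
        self.accessed_sections.add(key)
        return super().__getitem__(key)

recording_parser = RecordingConfigParser()
recording_parser.read_string("""
[topsecret.server.com]
port = 50022
forwardx11 = no
""")
_ = recording_parser['topsecret.server.com']['port']  # Part 1: access an element

# Parts 2/3: every accessed section becomes one expansion;
# option values become fuzzable <value> placeholders
config_ebnf_grammar = {"<start>": []}
for section in list(recording_parser.accessed_sections):
    expansion = "[" + section + "]\n"
    for option in recording_parser[section]:
        expansion += option + " = <value>\n"
    config_ebnf_grammar["<start>"].append(expansion)
config_ebnf_grammar["<value>"] = ["<char>+"]
config_ebnf_grammar["<char>"] = srange(string.ascii_letters + string.digits)
assert is_valid_grammar(config_ebnf_grammar)

config_fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(config_ebnf_grammar))
fuzzed_config = config_fuzzer.fuzz()
print(fuzzed_config)
configparser.ConfigParser().read_string(fuzzed_config)  # round-trip check
###Output
_____no_output_____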
###Markdown
**Solution.** Left to the reader – the sketch above is one possible starting point. Enjoy!

Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results.

**Solution.** Left to the reader. Enjoy hacking!

Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure.

**Solution.** Left to the reader. Enjoy hacking!

Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.

**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).

Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features. This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> print(option_ebnf_grammar)
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py', ' --indent-size 54 --diff --global-config k --select &, --list-fixes -a --hang-closing --range 0 72 --ignore-local-config -p 8 --version -d --experimental foo.py', ' --ignore i --jobs -16 --verbose -v --line-range -3 9 -r --help --max-line-length 8 -h --aggressive --recursive --exclude qE" --in-place -j -979 -i --pep8-passes 4 --version --in-place --aggressive --version foo.py']
```

The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' foo.py', ' --range 46 -1 --recursive -d --select <6 --exclude :" --global-config UVE --help --aggressive --experimental -r --line-range -7 -9 --version -i -h --indent-size -05 --max-line-length 8 --in-place --verbose --jobs -32 --ignore-local-config -v -p -1 --hang-closing -j 38 -a --list-fixes --pep8-passes 67 --diff --ignore v --select I --ignore (1NJ --ignore Km --ignore ? --select ^kZ --global-config y --select ia]9 --exclude o --ignore R!4GP.x8/ --ignore D --exclude 7 --exclude Bd -a --recursive --verbose foo.py', " --ignore \\ --global-config l --global-config @ --ignore ,CM~& --ignore nb --select c --global-config zgW --ignore $`s{H --global-config - --exclude 2| --select O --exclude 0 --exclude * --ignore qA'F}X --global-config p>_r+ --global-config eQ --exclude [ --ignore t --select h) --select %f --exclude u3;=TL --global-config w --ignore j5 --exclude Y --ignore S --ignore ]J --global-config 1 --ignore-local-config --max-line-length 36693 -i foo.py"]
```

The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.

As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. Options in PythonLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`actions`) allow to store specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import fuzzingbook_utils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol
PROCESS_NUMBERS_EBNF_GRAMMAR = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module. Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}, 'args': ('-h', '--help'), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}, 'args': ('integers',), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}, 'args': ('--sum',), 'self': <argparse._MutuallyExclusiveGroup object at 0x117b07588>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}, 'args': ('--min',), 'self': <argparse._MutuallyExclusiveGroup object at 0x117b07588>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}, 'args': ('--max',), 'self': <argparse._MutuallyExclusiveGroup object at 0x117b07588>}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner(object):
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start> ::= <option>* <arguments>
<option> ::=
<arguments> ::=
```

in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator. Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        # Options and arguments alike: append the new expansion to its target list
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
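###Markdown
A quick isolated check (our own addition, not from the original text): since the rule-adding methods operate on `self.grammar`, we can exercise one of them directly on a fresh miner – here, `add_int_rule()`:
###Code
demo_miner = OptionGrammarMiner(process_numbers)
demo_miner.grammar = {}  # normally initialized by mine_ebnf_grammar()
demo_miner.add_int_rule()
demo_miner.grammar
###Output
_____no_output_____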
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group><option>* <arguments>
<group> ::=
```

and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
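###Markdown
As a cross-check (our own addition, mirroring the earlier experiment with the handwritten grammar), we can feed such generated invocations right back into `process_numbers()`. A fuzzed `-h` or `--help` option would make `argparse` exit, so we simply skip invocations containing one:
###Code
f = GrammarCoverageFuzzer(grammar)
args = f.fuzz().split()
while "-h" in args or "--help" in args:
    args = f.fuzz().split()  # skip invocations that would just print help
print(args)
process_numbers(args)
###Output
_____no_output_____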
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar.

Testing Autopep8

Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing]
[files [files ...]]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W503)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
###Markdown
Autopep8 Setup

We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 Grammar

We can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options

Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
--hang-closing --experimental --aggressive foo.py
--ignore-local-config -d -h -p 9 --version --list-fixes foo.py
-a --verbose foo.py
-v --indent-size 7 --global-config { foo.py
--in-place --help --select ~s --max-line-length 1 foo.py
--pep8-passes 8 --diff foo.py
-i --recursive foo.py
-r --hang-closing foo.py
--jobs 0 -i foo.py
--exclude k --line-range 3 6 --verbose foo.py
-v -i foo.py
--version -a --list-fixes foo.py
--ignore x -r foo.py
-j 4 --in-place -a foo.py
--range 5 2 --list-fixes foo.py
--indent-size 5 --indent-size 3 foo.py
--indent-size 0 --indent-size 8 foo.py
--indent-size 7 --indent-size 3 foo.py
--indent-size 9 --verbose foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input:
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8 = ProgramRunner(args)
result, outcome = autopep8.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 -a --max-line-length 2 --jobs 5 --help -r foo.py
$ autopep8 --version --indent-size 0 --ignore-local-config -h foo.py
autopep8 1.3.4 (pycodestyle: 2.5.0)
$ autopep8 --ignore z --diff -j 7 --experimental --list-fixes --verbose -i --recursive foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing]
[files [files ...]]
autopep8: error: --in-place and --diff are mutually exclusive
$ autopep8 --line-range 1 6 --in-place --select _ foo.py
$ autopep8 --exclude n --pep8-passes 3 --aggressive foo.py
$ autopep8 --global-config &F -p 4 -d foo.py
$ autopep8 --hang-closing --range 8 9 -v foo.py
[file:foo.py]
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
$ autopep8 --indent-size 1 --version --hang-closing foo.py
autopep8 1.3.4 (pycodestyle: 2.5.0)
$ autopep8 --indent-size 3 --hang-closing --aggressive foo.py
$ autopep8 --indent-size 8 -r --in-place foo.py
$ autopep8 --indent-size 9 --indent-size 7 --version foo.py
autopep8 1.3.4 (pycodestyle: 2.5.0)
$ autopep8 -a --aggressive --help -v foo.py
$ autopep8 --indent-size 9 --indent-size 7 foo.py
$ autopep8 --indent-size 5 --indent-size 2 --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 1 issue(s) to fix {'E111': {3}}
$ autopep8 --indent-size 9 --in-place --recursive foo.py
$ autopep8 --indent-size 9 --indent-size 9 foo.py
$ autopep8 --indent-size 6 --indent-size 9 -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 2 issue(s) to fix {'E111': {3}, 'E117': {3}}
$ autopep8 --indent-size 4 --indent-size -5 --list-fixes foo.py
$ autopep8 --indent-size 93 -a foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration Options

Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing']
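###Markdown
The converted BNF grammar is available via `grammar()`; as a quick sanity check (our own addition), we verify that it is a valid grammar:
###Code
assert is_valid_grammar(autopep8_runner.grammar())
###Output
_____no_output_____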
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
-j -8 foo.py
--aggressive --global-config U} --version --verbose foo.py
--help --experimental -p 01 --hang-closing -r -d --list-fixes foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
foo.py
-m --no-warn-unused-configs --cache-dir --dirty-stubs --xml-report b --strict --always-false --no-warn-no-return --disallow-any-generics foo.py
--linecoverage-report VF? --local-partial-types foo.py
--python-executable --dump-graph --any-exprs-report j --warn-unused-ignores --bazel -2 foo.py
--scripts-are-modules --warn-no-return --verbose -p --no-silence-site-packages --shadow-file --no-strict-optional --disallow-subclassing-any --strict-optional --almost-silent --package --help foo.py
--check-untyped-defs --warn-incomplete-stub --no-check-untyped-defs --allow-untyped-calls --ignore-missing-imports foo.py
--show-traceback --hide-column-numbers --disallow-any-decorated --disallow-untyped-decorators --xslt-html-report pm --warn-redundant-casts --fast-parser --package-root --html-report x --no-site-packages --hide-error-context --always-true foo.py
--disallow-incomplete-defs --strict-optional-whitelist K -V foo.py
--custom-typeshed-dir r^ --command -i --skip-version-check foo.py
--config-file C --allow-incomplete-defs --no-warn-redundant-casts --find-occurrences v8 --warn-unused-configs --disallow-untyped-defs foo.py
###Markdown
Example: Notedown

Here are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--nomagic
-o --examples --match 6? --timeout 93 --help --run >
--precode Y --rmagic --version 2
--template '* --strip s8p
--output -h --debug ^
--execute --render --debug v
--knit --to q --from m --run -h --version +
-o --rmagic --nomagic J
--precode 4 f --version ]
-o E --version HB
###Markdown
Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here are the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--run --debug --help --execute
-o --timeout 8
--precode : --render -h --run
--help --debug --strip --nomagic G
--render --debug --rmagic --nomagic r
--help --version --execute --strip ^
-h --execute --precode t --version ip
--nomagic --debug --version --debug -h --render K
--examples --version --help --examples
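###Markdown
As a quick check (our own addition), the pairwise grammar indeed offers one expansion per pair – 190 for the 20 `notedown` options:
###Code
len(pairwise_notedown_grammar["<option>"])
###Output
_____no_output_____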
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
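###Markdown
As a sanity check (our own addition; `math.comb` requires Python 3.8+), the standard library reproduces exactly these counts:
###Code
from math import comb
[comb(len(option_list), k) for k in range(1, 20)]
###Output
_____no_output_____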
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient

$${n \choose k} = \frac{n!}{k!(n - k)!}$$

which for $k = 2$ (all pairs) gives us

$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n \times (n - 1)}{2}$$

For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus need $29 \times 28 / 2 = 406$ tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
    (len(autopep8_runner.ebnf_grammar()["<option>"]) - 1) // 2
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 5,995 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
    (len(mypy_runner.ebnf_grammar()["<option>"]) - 1) // 2
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in well under two hours of testing, though. If your program has more options that all need to be covered in combinations, it is advisable that you limit the number of configurations further – for instance, by limiting combinatorial testing to those combinations that possibly can interact with each other, and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you.

Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
print(option_ebnf_grammar)
###Output
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. Lessons Learned* Besides regular input data, program _configurations_ make an important testing target.* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. Next StepsIf you liked the idea of mining a grammar from a program, do not miss:* [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on:* [how to parse and recombine inputs](Parser.ipynb)* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)* [how to simplify inputs that cause a failure](Reducer.ipynb) BackgroundAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. Exercises Exercise 1: ifdef Configuration FuzzingIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code```C#ifdef LONG_FOOlong foo() { ... }#elseint foo() { ... }#endif```the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<var>` or `-D<var>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:```c#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800#endif#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \ && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \ && !defined(XML_DEV_URANDOM) \ && !defined(_WIN32) \ && !defined(XML_POOR_ENTROPY)# error#endif#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */#endif#ifdef XML_UNICODE_WCHAR_T#define XML_T(x) (const wchar_t)x#define XML_L(x) L ## x#else#define XML_T(x) (const unsigned short)x#define XML_L(x) x#endifint fun(int x) { return XML_T(x); }``` A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. Part 1: Extract Preprocessor VariablesWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that```pythoncpp_identifiers(open("xmlparse.c").readlines()) ```returns the set```python{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}``` **Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option GrammarWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D` for a preprocessor variable ``. Using this grammar `cpp_grammar`, a fuzzer ```pythong = GrammarCoverageFuzzer(cpp_grammar)```would create C compiler invocations such as```python[g.fuzz() for i in range(10)]['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c', 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c', 'cc -DXML_POOR_ENTROPY xmlparse.c', 'cc -DRANDOM xmlparse.c', 'cc -D_WIN xmlparse.c', 'cc -DHAVE_ARC xmlparse.c', ...]``` **Solution.** This is not very difficult:
###Code
from Grammars import new_symbol
cpp_grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
cpp_grammar
assert is_valid_grammar(cpp_grammar)
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration FuzzingUsing the grammar just produced, use a `GrammarCoverageFuzzer` to1. Test each processor variable individually2. Test each pair of processor variables, using `pairwise()`.What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__SCO__ -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
$ cc -c -DRANDOM -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DTIOCSWINSZ -DHAVE_GETRANDOM -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)  # fuzz with the pairwise grammar
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM -D__SCO__ xmlparse.c
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_POOR_ENTROPY -DRANDOM_BUF -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
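###Markdown
Recall that the original task was to find out which of these configurations actually _compile_. A minimal sketch for tabulating the compiling ones, based on the compiler's exit code (the `returncode` of the `result` returned by `ProgramRunner.run()`):
###Code
compiling_configs = []
for i in range(10):
    invocation = g.fuzz()
    cc_runner = ProgramRunner(invocation.split(' '))
    (result, outcome) = cc_runner.run()
    if result.returncode == 0:  # compilation succeeded
        compiling_configs.append(invocation)
compiling_configs
###Output
_____no_output_____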
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when actually, the type is not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration FuzzingBesides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):```[DEFAULT]ServerAliveInterval = 45Compression = yesCompressionLevel = 9ForwardX11 = yes[bitbucket.org]User = hg[topsecret.server.com]Port = 50022ForwardX11 = no``` The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read ConfigurationUsing `configparser`, create a program reading in the above configuration file and accessing the individual elements. Part 2: Create a Configuration GrammarDesign a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. Part 3: Mine a Configuration GrammarBy dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy! Exercise 3: Extracting and Fuzzing C Command-Line OptionsIn C programs, the `getopt()` function is frequently used to process configuration options. A call```getopt(argc, argv, "bf:")```indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon). Part 1: Getopt FuzzingWrite a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking! Part 2: Fuzzing Long Options in CSame as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking! Exercise 4: Expansions in ContextIn our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:```<option> ::= ... | --line-range <line> <line> | ... <line> ::= <int> <int> ::= (-)?<digit>+ <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing ConfigurationsThe behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences. **Prerequisites*** You should have read the [chapter on grammars](Grammars.ipynb).* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb). SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.ConfigurationFuzzer import <identifier>```and then make use of the following features.This chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:```python>>> autopep8_runner = OptionRunner("autopep8", "foo.py")```The grammar can be extracted via the method `ebnf_grammar()`:```python>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()>>> print(option_ebnf_grammar){'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}```The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:```python>>> from Grammars import convert_ebnf_grammar>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))>>> [fuzzer.fuzz() for i in range(3)][' --pep8-passes 62 foo.py', ' --exit-code -j -95 --in-place -d --line-range 87 1 --list-fixes --range 3 0 --hang-closing --aggressive --experimental foo.py', ' --indent-size 4 --ignore 7_ -p -3 --jobs 9 --diff -i -a --ignore-local-config -h --select y -r --exclude Ig8> --max-line-length -43 --help --verbose --global-config AR --recursive --version -v --ignore m --select ~a|h< --global-config vx --exclude `q --ignore l --select J --select pLNOV --ignore Y{ --global-config ; --exclude f --select Hr --exclude - --select * --exclude )W --select c9+4 --select 0]s --ignore b6 --ignore k$? --select G --select BM --global-config \\1[ --exclude Z --exclude d --exclude \' --global-config DU --select u --global-config QSz --ignore e!,t2F --global-config w. --exclude i --select n --exclude ETCj --exclude P: --ignore (& --exclude K3@ --select =/ --select Xo --exclude %" --select } --exclude 5 --ignore )^ --global-config 6 --exclude O!& --global-config B foo.py']```The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.```python>>> autopep8_runner = OptionRunner("autopep8", "foo.py")>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)>>> [autopep8_fuzzer.fuzz() for i in range(3)][' --indent-size 9 foo.py', ' -v --global-config F --range -0 -715382 --help --ignore-local-config -j -4 -a --exit-code -r --list-fixes --hang-closing --diff --version --select hZ --jobs 5 --pep8-passes -6 --line-range -9 5 --exclude k% --recursive -h -i --max-line-length 703 --aggressive -d --verbose --experimental --in-place -p -9 --ignore G --ignore 8U --global-config mKa --global-config 45[ --ignore & --global-config Yp --global-config .i) --select |7 --select `*l^SIy> --exclude C --global-config = --ignore xD --global-config bQ --select Tsq --select \\ --select cd~t --exclude ?V --global-config 1O:R --global-config g --global-config E$W --exclude MN --global-config ;v --select !2BX --select / --global-config L9J_w3 --ignore \' --select uz( --exclude P@ --global-config ero --exclude H --global-config 0,fj --ignore }<-n --ignore +{6 --select " --ignore ]A --in-place --global-config ~P --range -19 -84 foo.py', ' foo.py']```The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. Configuration OptionsWhen we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line. As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. Options in PythonLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
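###Markdown
Since the group was defined with `required=True`, omitting the operator altogether is rejected as well – a quick check along the same lines:
###Code
with ExpectError(print_traceback=False):
    process_numbers(["1", "2", "3"])  # no --sum/--min/--max given
###Output
_____no_output_____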
###Markdown
A Grammar for ConfigurationsHow can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol
PROCESS_NUMBERS_EBNF_GRAMMAR = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_ Mining Configuration OptionsIn this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar. Tracking ArgumentsLet us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}, 'args': ('-h', '--help'), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}, 'args': ('integers',), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}, 'args': ('--sum',), 'self': <argparse._MutuallyExclusiveGroup object at 0x7fa82955d710>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}, 'args': ('--min',), 'self': <argparse._MutuallyExclusiveGroup object at 0x7fa82955d710>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}, 'args': ('--max',), 'self': <argparse._MutuallyExclusiveGroup object at 0x7fa82955d710>}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters. A Grammar Miner for Options and Arguments Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner(object):
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form```<start> ::= (<option>)*<arguments>```with initially empty expansion lists for `<option>` and `<arguments>`, in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The `process_argument()` method now analyzes the arguments passed and adds them to the grammar:* If the argument starts with `-`, it gets added as an optional element to the `<option>` list* Otherwise, it gets added to the `<arguments>` list.The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        # Options and arguments are handled alike:
        # record the new expansion under the respective symbol
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in```<start> ::= <group>(<option>)*<arguments>```, and the new (initially empty) `<group>` rule is filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
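###Markdown
To illustrate these operators, here is a minimal sketch with a hypothetical toy parser whose mutually exclusive group is _not_ required; mining it should prefix `<start>` with an optional `<group>?`:
###Code
def toy_group_parser():  # hypothetical example for illustration only
    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()  # required defaults to False
    group.add_argument('--fast', action='store_true')
    group.add_argument('--slow', action='store_true')
    parser.parse_args([])

toy_grammar = OptionGrammarMiner(toy_group_parser).mine_ebnf_grammar()
toy_grammar["<start>"], toy_grammar["<group>"]
###Output
_____no_output_____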
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
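###Markdown
The `nargs` handling of `process_arg()` can also be seen at work in a sketch with a hypothetical toy parser: a numeric `nargs` repeats the parameter, while an abstract specifier such as `+` becomes an EBNF operator (yielding expansions like ` --pair <int> <int>` and ` --many( <int>)+`):
###Code
def toy_nargs_parser():  # hypothetical example for illustration only
    parser = argparse.ArgumentParser()
    parser.add_argument('--pair', type=int, nargs=2)
    parser.add_argument('--many', type=int, nargs='+')
    parser.parse_args([])

OptionGrammarMiner(toy_nargs_parser).mine_ebnf_grammar()["<option>"]
###Output
_____no_output_____
###Markdown
Back to the grammar mined from `process_numbers()`: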
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
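###Markdown
These invocations can be fed right back into `process_numbers()` – a sketch; since the mined grammar also covers `-h` and `--help`, some runs exit with a usage message, which we tolerate using `ExpectError`:
###Code
f = GrammarCoverageFuzzer(grammar)
for i in range(3):
    args = f.fuzz().split()
    with ExpectError(print_traceback=False):
        process_numbers(args)  # -h/--help ends in a (caught) SystemExit
###Output
_____no_output_____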
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 SetupWe want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8 = ProgramRunner(args)
result, outcome = autopep8.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
autopep8: error: --recursive must be used with --in-place or --diff
$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
autopep8: error: --recursive must be used with --in-place or --diff
$ autopep8 --global-config < -j 9 -v -a foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
autopep8: error: parallel jobs requires --in-place
$ autopep8 --line-range 7 1 --hang-closing -d foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
autopep8: error: First value of --range should be less than or equal to the second
$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
autopep8 1.5.5 (pycodestyle: 2.5.0)
$ autopep8 --jobs -2 --experimental --version foo.py
autopep8 1.5.5 (pycodestyle: 2.5.0)
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
autopep8: error: --recursive must be used with --in-place or --diff
$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files [files ...]]
autopep8: error: --recursive must be used with --in-place or --diff
$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
autopep8 1.5.5 (pycodestyle: 2.5.0)
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
autopep8 1.5.5 (pycodestyle: 2.5.0)
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the runner given now (or set previously) with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying them in another context, as we will illustrate below.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
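###Markdown
Since `run()` accepts an arbitrary `OptionRunner`, we can also apply invocations fuzzed from the `autopep8` grammar to an entirely different program – say, `mypy`. (This cell is an added illustration of the feature described above; expect most such invocations to be rejected by the target program.)
###Code
autopep8_fuzzer.run(OptionRunner("mypy", "foo.py"))
###Output
_____no_output_____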
###Markdown
Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--explicit-package-bases --allow-any-generics --cobertura-xml-report yB --no-fast-exit --show-error-context --incremental --no-site-packages --package --no-sqlite-cache --warn-unused-ignores foo.py
-c --inferstats --any-exprs-report @ --no-error-summary --pdb foo.py
--skip-cache-mtime-checks --strict-optional-whitelist --disallow-any-explicit --no-strict-equality --disallow-untyped-calls --allow-subclassing-any foo.py
--warn-redundant-casts --strict-equality --warn-return-any --tb --dump-build-stats --shadow-file --semantic-analysis-only --raise-exceptions --no-silence-site-packages --junit-xml 'G --no-namespace-packages --help --disallow-any-generics foo.py
--bazel -h -p --disallow-untyped-globals --strict-optional --hide-error-context --disallow-untyped-decorators --enable-error-code foo.py
--scripts-are-modules --hide-column-numbers --version --hide-absolute-path --local-partial-types --no-check-untyped-defs --txt-report = --linecount-report t --allow-untyped-decorators -2 --package-root foo.py
--implicit-optional --disallow-any-expr --disable-error-code --show-error-codes --disallow-any-unimported --dump-graph -m --cache-fine-grained --show-column-numbers --always-false foo.py
--always-true --no-incremental -v --no-warn-unused-configs --allow-untyped-calls --py2 --no-implicit-reexport --ignore-missing-imports --warn-unreachable --strict foo.py
--module --verbose --linecoverage-report |8 --sqlite-cache --warn-no-return foo.py
###Markdown
Example: Notedown

Here are the configuration options for the `notedown` Notebook-to-Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--execute --strip --run n
--render --match # -h --help --from - -o z*h --version /|
--precode k 1 --rmagic Q
--timeout 9 --nomagic 5
--debug -h --examples --help r
--examples --rmagic tM
--template U --output --to ! --knit o< -o --examples Z
--rmagic --debug --nomagic f
--strip -h --rmagic D
###Markdown
Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we can also see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
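###Markdown
The `<option>` list of the new grammar now holds one expansion per option pair – 190 in total, matching `len(pairs)` from above (a small added sanity check):
###Code
len(pairwise_notedown_grammar["<option>"])
###Output
_____no_output_____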
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--run --nomagic Ku
--help --execute --rmagic --debug --run --execute
--version --debug --help --strip
--help --nomagic --rmagic --nomagic t
--nomagic --version --execute --render
-h --run -h --debug
--rmagic --examples --execute --nomagic r
--run --render --run --examples
-h --nomagic --examples --version "
--execute --strip --help --rmagic A
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
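###Markdown
Before we turn to the formula behind these numbers, a quick cross-check (a cell added for illustration): Python's `math.comb` reproduces the counts we just printed.
###Code
import math

# math.comb(n, k) counts the ways to choose k out of n options
assert [math.comb(20, k) for k in [1, 2, 3, 4]] == [20, 190, 1140, 4845]
###Output
_____no_output_____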
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient

$${n \choose k} = \frac{n!}{k!(n - k)!}$$

which for $k = 2$ (all pairs) gives us

$${n \choose 2} = \frac{n!}{2!(n - 2)!} = \frac{n (n - 1)}{2}$$

For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 406 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 812 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in three hours of testing, though.

If your program has more options that you want to get covered in combinations, it is advisable to limit the number of configurations further – for instance, by restricting combinatorial testing to those combinations that can possibly interact with each other, and covering all other (presumably orthogonal) options individually.

This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you.

Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
print(option_ebnf_grammar)
###Output
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

Lessons Learned

* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.

Next Steps

If you liked the idea of mining a grammar from a program, do not miss:

* [how to mine grammars for input data](GrammarMiner.ipynb)

Our next steps in the book focus on:

* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)

Background

Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under the control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.

More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.

Exercises

Exercise 1: #ifdef Configuration Fuzzing

In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```c
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`).

Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.

Part 1: Extract Preprocessor Variables

Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option Grammar

With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<identifier>` for a preprocessor variable `<identifier>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c',
 ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import new_symbol
cpp_grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
cpp_grammar
assert is_valid_grammar(cpp_grammar)
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration Fuzzing

Using the grammar just produced, use a `GrammarCoverageFuzzer` to

1. Test each preprocessor variable individually
2. Test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations?

**Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__SCO__ -DRANDOM -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DXML_DEV_URANDOM -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__UNIXWARE__ -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM -DXML_POOR_ENTROPY -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DXML_POOR_ENTROPY -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
# Create a fuzzer that actually uses the pairwise grammar
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM -DRANDOM -DXML_DEV_URANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM -DXML_DEV_URANDOM xmlparse.c
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_UNICODE_WCHAR_T -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D_WIN -DRANDOM_BUF -DXML_POOR_ENTROPY -DRANDOM -DXML_DEV_URANDOM -DHAVE_SYSCALL_GETRANDOM -DTIOCSWINSZ xmlparse.c
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN -DHAVE_ARC -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations are _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files.

The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.

Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy!

Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```c
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results.

**Solution.** Left to the reader. Enjoy hacking!

Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure.

**Solution.** Left to the reader. Enjoy hacking!

Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('XTGFX-tcotE')
###Output
_____no_output_____
###Markdown
**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).
###Code
import bookutils
from typing import List, Union, Optional, Callable, Type
###Output
_____no_output_____
###Markdown
Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features.

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> option_ebnf_grammar
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py', ' --max-line-length 6 --jobs -594 --ignore , --ignore-local-config -r --in-place --list-fixes --recursive -v --experimental -p 72 -h --aggressive --indent-size 3 --exit-code --hang-closing --pep8-passes -180 -d --global-config XQjT --diff --exclude *g -j 43 --help --select A --version --verbose -a --line-range -3963 0 --range 1 4 -i --in-place --version foo.py', ' --global-config 2 --select PuR --ignore b --ignore @ --ignore ;7d --ignore ) --ignore Fw1Z --ignore 0 --global-config ynf --select >G --select + --global-config ( --exclude v --exclude V --ignore ^ --select L --exclude 6 --exclude =$` --ignore % --global-config N --ignore [8maop --ignore 3! --select ~?c< --exclude C --select U --exclude h --global-config --global-config 5O --select x --select B] --ignore _ --global-config .K --global-config S --exclude r --global-config qW --exclude te4/ --exclude J} --ignore " --exclude |H --global-config -&k{s --global-config E --select :I --ignore 9 --global-config M --exclude YD --select \\ --exclude z --ignore i --select \'l --ignore M --ignore ;h --exit-code foo.py']
```

The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' --diff foo.py', ' --exclude --global-config V --select He --global-config | --global-config n}aicm --ignore 7 --ignore b --global-config u --exclude WB` --exclude 2 --exclude JpZt --exclude l_ --select *%^ --exclude & --exclude )Lv --global-config [ --global-config " --exclude sOEXP --aggressive --exclude \' --help --diff --experimental foo.py', ' --ignore FCw; --global-config /1K?:6 --exclude U --exclude z --ignore rQ --select x --select Y --select { --global-config o --select 34 --exclude ]j --select ~ --exclude 9@ --ignore w --global-config CVL --diff foo.py']
```

The final step in testing would now be to invoke the program with these arguments.

Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

The `OptionRunner` constructor accepts an additional `miner` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create your own option grammar miners.

![](PICS/ConfigurationFuzzer-synopsis-1.svg)

Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.

One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.

As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. Options in PythonLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow to store specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(SystemExit, print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested, as well as for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.

Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x110489700>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x110489700>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x110489700>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
"""Helper class for extracting option grammars"""
def __init__(self, function: Callable, log: bool = False):
"""Constructor.
`function` - a function processing arguments using argparse()
`log` - output diagnostics if True
"""
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start> ::= (<option>)* <arguments>
<option> ::=
<arguments> ::=
```

in which the (initially empty) expansions for options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
"""Extract EBNF option grammar"""
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
"""Extract BNF option grammar"""
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return self.traceit
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as an EBNF operator.

Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group> (<option>)* <arguments>
<group> ::=
```

and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
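###Markdown
As a quick recap (a cell added here, mirroring the logic of `add_group()` above), the four combinations of `required` and `exclusive` map to EBNF operators as follows:
###Code
for required in [True, False]:
    for exclusive in [True, False]:
        # Exactly one / at least one / at most one / any number
        suffix = "" if required and exclusive \
            else "+" if required \
            else "?" if exclusive \
            else "*"
        print(f"required={required}, exclusive={exclusive}: <group>{suffix}")
###Output
required=True, exclusive=True: <group>
required=True, exclusive=False: <group>+
required=False, exclusive=True: <group>?
required=False, exclusive=False: <group>*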
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
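###Markdown
As with the handwritten grammar earlier, we can feed these invocations right back into `process_numbers()`. (This cell is an added illustration; since fuzzed invocations may include `--help` or invalid option combinations, which make `argparse` exit, we guard the call with `ExpectError`.)
###Code
from ExpectError import ExpectError

f = GrammarCoverageFuzzer(grammar)
for i in range(3):
    args = f.fuzz().split()
    print(args)
    # argparse exits via SystemExit on --help or invalid combinations
    with ExpectError(SystemExit, print_traceback=False):
        process_numbers(args)
###Output
_____no_output_____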
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar.

Testing Autopep8

Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
### Autopep8 Setup

We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
### Mining an Autopep8 Grammar

We can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
### Creating Autopep8 Options

Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
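Since `GrammarCoverageFuzzer` keeps track of the expansions it has covered, we can check how much of the option grammar the 20 runs above have exercised – a minimal sketch using its coverage bookkeeping:

```python
# Sketch: compare covered expansions against the maximum achievable.
covered = f.expansion_coverage()
maximum = f.max_expansion_coverage()
print(f"{len(covered)} of {len(maximum)} expansions covered")
```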
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
## Classes for Fuzzing Configuration Options

Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(ProgramRunner):
"""Run a program while determining its option grammar"""
def __init__(self, program: Union[str, List[str]],
arguments: Optional[str] = None, *,
log: bool = False,
miner_class: Optional[Type[OptionGrammarMiner]] = None):
"""Constructor.
`program` - the (Python) program to be executed
`arguments` - an (optional) string with arguments for `program`
`log` - if True, enable logging in miner
`miner_class` - the `OptionGrammarMiner` class to be used
(default: `OptionGrammarMiner`)
"""
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
if miner_class is None:
miner_class = OptionGrammarMiner
self.miner_class = miner_class
self.log = log
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
if self._executable is None:
raise IOError(self.base_executable + ": not found")
first_line = open(self._executable).readline()
if first_line.find("python") < 0:
raise IOError(self.base_executable + ": not a Python executable")
self.contents = open(self._executable).read()
def invoker(self):
# We are passing the local variables as is, such that we can access `self`
# We set __name__ to '__main__' to invoke the script as an executable
exec(self.contents, {'__name__': '__main__'})
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = self.miner_class(self.invoker, log=self.log)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
"""Return extracted grammar in EBNF form"""
return self._ebnf_grammar
def grammar(self):
"""Return extracted grammar in BNF form"""
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
"""Fuzz a (Python) program using its arguments"""
def __init__(self, runner: OptionRunner, *args, **kwargs):
"""Constructor. `runner` is an OptionRunner."""
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
### Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
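We can also wrap this in a small loop – a sketch that runs a few fuzzed invocations and reports the outcome of each (`result` is the `CompletedProcess` returned by the runner):

```python
# Sketch: three fuzzed runs, printing invocation and outcome.
for i in range(3):
    result, outcome = autopep8_fuzzer.run(autopep8_runner)
    print(" ".join(result.args), "->", outcome)
```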
###Markdown
### Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --no-explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--explicit-package-bases --exclude --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --inferstats --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --command --module --disallow-redefinition --cache-dir --no-strict-optional -c -h -p foo.py
--implicit-optional --disallow-any-generics --no-warn-unreachable --strict-equality --show-error-codes --package --always-false foo.py
###Markdown
### Example: Notedown

Here are the configuration options for the `notedown` Notebook-to-Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
c
--help -o --nomagic --version --output --match H9- --rmagic --render --run
--examples -h --precode r x --execute --strip --debug <
--execute --rmagic --rmagic !
-h --render --debug 2
--examples --version --version .
--execute --help --render R
--run -h 6T
--execute --rmagic -h ;
--from 5* --version --help s
###Markdown
## Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here are the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
--help --timeout -0 w
--rmagic --render --execute --strip
--execute --rmagic --nomagic --debug
--help --strip --strip --version r
-h --debug --help --execute
--execute --debug -h --render
--strip --rmagic --render --examples h
-h --run --examples --debug
--from PX --examples >
--run --strip --strip --render
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 406 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 812 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
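As a quick sanity check for these numbers, here is a minimal sketch using the `math` module (Python 3.8+): `math.comb()` counts unordered pairs, while `math.perm()` counts ordered pairs – which is what our pairwise tests cover.

```python
import math

assert math.comb(29, 2) == 406      # unordered option pairs for autopep8
assert math.perm(29, 2) == 812      # ordered pairs = tests needed
assert math.perm(110, 2) == 11990   # same computation for mypy
```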
###Markdown
Even if each pair takes a second to run, we'd still be done in three hours of testing, though. If your program has more options that you want to have covered in combinations, it is advisable to limit the number of configurations further – for instance, by limiting combinatorial testing to those combinations that can possibly interact with each other, and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you.

## Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
option_ebnf_grammar
###Output
_____no_output_____
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. The `OptionRunner` constructor accepts an additional `miner` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create one's own option grammar miners.
###Code
# ignore
from ClassDiagram import display_class_hierarchy
from Fuzzer import Fuzzer, Runner, ProgramRunner
from Grammars import Expansion
from GrammarFuzzer import GrammarFuzzer, DerivationTree
from GrammarCoverageFuzzer import TrackingGrammarCoverageFuzzer
# ignore
display_class_hierarchy([OptionRunner, OptionFuzzer, OptionGrammarMiner],
public_methods=[
Fuzzer.__init__,
Fuzzer.fuzz,
Fuzzer.run,
Fuzzer.runs,
GrammarFuzzer.__init__,
GrammarFuzzer.fuzz,
GrammarFuzzer.fuzz_tree,
TrackingGrammarCoverageFuzzer.__init__,
OptionFuzzer.__init__,
OptionFuzzer.run,
Runner.__init__,
Runner.run,
ProgramRunner.__init__,
ProgramRunner.__init__,
OptionRunner.__init__,
OptionRunner.ebnf_grammar,
OptionRunner.grammar,
OptionGrammarMiner.__init__,
OptionGrammarMiner.mine_ebnf_grammar,
OptionGrammarMiner.mine_grammar,
],
types={
'DerivationTree': DerivationTree,
'Expansion': Expansion,
'Grammar': Grammar
},
project='fuzzingbook')
###Output
_____no_output_____
###Markdown
## Lessons Learned

* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.

## Next Steps

If you liked the idea of mining a grammar from a program, do not miss:

* [how to mine grammars for input data](GrammarMiner.ipynb)

Our next steps in the book focus on:

* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)

## Background

Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.

More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.

## Exercises

### Exercise 1: #ifdef Configuration Fuzzing

In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```c
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<var>` or `-D<var>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32  0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS  /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments.
Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.

#### Part 1: Extract Preprocessor Variables

Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM',
 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
#### Part 2: Derive an Option Grammar

With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<id>` for a preprocessor variable `<id>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c', ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
#### Part 3: C Preprocessor Configuration Fuzzing

Using the grammar just produced, use a `GrammarCoverageFuzzer` to

1. Test each preprocessor variable individually
2. Test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations?

**Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D_WIN -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DHAVE_GETRANDOM -DXML_DEV_URANDOM -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_ARC -DTIOCSWINSZ -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM -DRANDOM_BUF -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)  # fuzz with the pairwise grammar
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_DEV_URANDOM -DRANDOM -D__SCO__ xmlparse.c
$ cc -c -DRANDOM_BUF -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN -DRANDOM -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -D__UNIXWARE__ -DRANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -D__SCO__ -DHAVE_ARC -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DRANDOM_BUF -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
### Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
#### Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

#### Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.

#### Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
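As a hint for Part 2, here is one possible starting point – a minimal sketch of a configuration grammar, modeled on the `example.ini` structure above. The symbol names are our own choice, and the grammar covers only a small fragment of the `.ini` format:

```python
from Grammars import Grammar, crange, convert_ebnf_grammar, is_valid_grammar
from GrammarCoverageFuzzer import GrammarCoverageFuzzer

INI_EBNF_GRAMMAR: Grammar = {
    "<start>": ["[DEFAULT]\n<defaults>\n[topsecret.server.com]\n<settings>\n"],
    "<defaults>": ["compression = <yes-no>\ncompressionlevel = <int>"],
    "<settings>": ["port = <int>\nforwardx11 = <yes-no>"],
    "<yes-no>": ["yes", "no"],
    "<int>": ["<digit>+"],
    "<digit>": crange("0", "9"),
}
assert is_valid_grammar(INI_EBNF_GRAMMAR)

ini_fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(INI_EBNF_GRAMMAR))
print(ini_fuzzer.fuzz())
```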
###Markdown
**Solution.** Left to the reader. Enjoy!

### Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

#### Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results.

**Solution.** Left to the reader. Enjoy hacking!

#### Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure.

**Solution.** Left to the reader. Enjoy hacking!

### Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line>   ::= <int>
<int>    ::= (-)?<digit>+
<digit>  ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
# Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('L0ztoXVru2U')
###Output
_____no_output_____
###Markdown
**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).
###Code
import bookutils
from typing import List, Union, Optional, Callable, Type
###Output
_____no_output_____
###Markdown
## Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features.

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> option_ebnf_grammar
{'<start>': ['<option>*<arguments>'],
 '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'],
 '<arguments>': [' foo.py'],
 '<str>': ['<char>+'],
 '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'],
 '<filename>': ['<str>'],
 '<int>': ['(-)?<digit>+'],
 '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'],
 '<n>': ['<int>'],
 '<globs>': ['<str>'],
 '<errors>': ['<str>'],
 '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' -d foo.py',
 ' --max-line-length -5 --version --line-range 3 -14 --verbose --in-place --ignore-local-config --help --jobs -8 --recursive --list-fixes --exit-code -h --hang-closing --ignore $n --range -002 -9 -j -67 --pep8-passes -8 --global-config YO&lT -i -a --experimental --indent-size 79 -r --aggressive -p -4 --exclude ?oh -v --diff --select fXd) --global-config c --ignore FV --global-config 8_Mk --global-config 1 --ignore b5 --global-config pj --ignore 6>[ --global-config ~N --exclude 0 --select L --exclude I --ignore !B --ignore eC]9z` --ignore K --global-config Ew --global-config A --select - --exclude .v --ignore P --select + --ignore H --select :ga --global-config @t --exclude R --exclude J{" --select s^< --ignore %\' --global-config x --ignore 7/( --global-config Z --global-config W2 --exclude D --ignore m --exclude y --select 3 --exclude ;,QU| --exclude } --global-config uq --ignore =S*r --ignore i\\ --ignore 4 --select G --ignore z --select zl --in-place foo.py',
 ' --help --ignore A --global-config S foo.py']
```

The `OptionFuzzer` class summarizes these steps.
Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' foo.py',
 " --exclude 1n --select J -h --in-place -p -50 --diff --experimental --max-line-length 86 --line-range 3 -9 -a --jobs -2 --recursive --aggressive --global-config tqHaBS --pep8-passes 7 --ignore fb; --list-fixes --help -j 14 --indent-size 7 --exit-code --ignore-local-config --version -v --hang-closing -i --range 1 9 --verbose -d -r --global-config = --exclude hD --global-config L` --exclude C~ --global-config 4w --ignore $ --ignore 7 --ignore P --select [6?e --global-config --global-config g --global-config / --select N|i --ignore _ --select osk --ignore O+ --exclude x --exclude 5 --ignore % --global-config { --ignore U --select p --exclude v8 --ignore ^z --select }*M --exclude Q&lyG -h --ignore ' -h --list-fixes --recursive foo.py",
 ' --global-config d --exclude K --ignore 9(rIX --global-config R --select E! --select , --global-config @ --global-config ] --ignore j --exclude Z --global-config c --global-config \\> --global-config ) --exclude F<23 --exclude m --version --version --recursive foo.py']
```

The final step in testing would now be to invoke the program with these arguments.

Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

The `OptionRunner` constructor accepts an additional `miner` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create one's own option grammar miners.

![](PICS/ConfigurationFuzzer-synopsis-1.svg)

## Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.

One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.

As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.

## Options in Python

Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`).

By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow to store specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(SystemExit, print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
## A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested, as well as grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

## Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.

Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

### Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x109a7bfa0>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x109a7bfa0>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x109a7bfa0>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters. A Grammar Miner for Options and Arguments Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
"""Helper class for extracting option grammars"""
def __init__(self, function: Callable, log: bool = False):
"""Constructor.
`function` - a function processing arguments using argparse()
`log` - output diagnostics if True
"""
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form
```
<start> ::= (<option>)* <arguments>
<option> ::=
<arguments> ::=
```
in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
"""Extract EBNF option grammar"""
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
"""Extract BNF option grammar"""
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return self.traceit
###Output
_____no_output_____
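###Markdown
Whether we are in a group is detected via the class name of `self_var`. A quick check (a small sketch) that this idiom holds for `argparse` parsers and groups:
###Code
# Sanity check for the "Group" class-name idiom used in traceit()
import argparse
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
assert repr(type(group)).find("Group") >= 0
assert repr(type(parser)).find("Group") < 0
###Output
_____no_output_____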
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.* Otherwise, it gets added to the `<arguments>` list.The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        self.grammar[target].append(arg)
###Output
_____no_output_____
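###Markdown
To see how the `nargs` handling plays out, here is a small standalone sketch (not part of the miner) that mirrors the logic of `process_arg()` on two hypothetical option definitions:
###Code
# Mirrors the nargs -> EBNF logic of process_arg() above (illustration only)
def nargs_to_ebnf(arg, param, nargs):
    if isinstance(nargs, int):
        return arg + param * nargs
    assert nargs in "?+*"
    return arg + '(' + param + ')' + nargs

assert nargs_to_ebnf(" --id", " <int>", 2) == " --id <int> <int>"
assert nargs_to_ebnf(" --id", " <int>", "+") == " --id( <int>)+"
###Output
_____no_output_____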
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
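###Markdown
Note that `argparse` passes `type=int` as the class `int` itself, which is what the `inspect` check above relies on – a quick sanity check:
###Code
# The int-detection idiom from add_parameter(), checked in isolation
assert inspect.isclass(int) and issubclass(int, int)
assert not inspect.isclass(abs)  # built-in functions are not classes
###Output
_____no_output_____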
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in
```
<start> ::= <group>(<option>)* <arguments>
<group> ::=
```
and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
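###Markdown
The four combinations of `required` and `exclusive` thus map onto the four EBNF operators – made explicit in this little illustration:
###Code
# Illustration: how required/exclusive select the expansion operator above
for required, exclusive, op in [(True, True, ''), (True, False, '+'),
                                (False, True, '?'), (False, False, '*')]:
    print(f"required={required}, exclusive={exclusive} -> <group>{op}")
###Output
required=True, exclusive=True -> <group>
required=True, exclusive=False -> <group>+
required=False, exclusive=True -> <group>?
required=False, exclusive=False -> <group>*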
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
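###Markdown
As before, we can split these invocations and feed them right back into `process_numbers()` – a quick sanity check. (We catch `SystemExit`, since fuzzed invocations containing `--help` or conflicting options make `argparse` exit.)
###Code
for i in range(3):
    args = f.fuzz().split()
    print(args)
    try:
        process_numbers(args)
    except SystemExit:
        pass  # raised, e.g., by --help or by conflicting options
###Output
_____no_output_____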
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 SetupWe want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
    # First line has to contain "/usr/bin/env python" or the like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but set up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(ProgramRunner):
"""Run a program while determining its option grammar"""
def __init__(self, program: Union[str, List[str]],
arguments: Optional[str] = None, *,
log: bool = False,
miner_class: Optional[Type[OptionGrammarMiner]] = None):
"""Constructor.
`program` - the (Python) program to be executed
`arguments` - an (optional) string with arguments for `program`
`log` - if True, enable logging in miner
`miner_class` - the `OptionGrammarMiner` class to be used
(default: `OptionGrammarMiner`)
"""
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
if miner_class is None:
miner_class = OptionGrammarMiner
self.miner_class = miner_class
self.log = log
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
if self._executable is None:
raise IOError(self.base_executable + ": not found")
first_line = open(self._executable).readline()
if first_line.find("python") < 0:
raise IOError(self.base_executable + ": not a Python executable")
self.contents = open(self._executable).read()
def invoker(self):
        # Execute the program contents in a fresh namespace;
        # setting __name__ to '__main__' makes it run as if invoked as a script
exec(self.contents, {'__name__': '__main__'})
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = self.miner_class(self.invoker, log=self.log)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
"""Return extracted grammar in EBNF form"""
return self._ebnf_grammar
def grammar(self):
"""Return extracted grammar in BNF form"""
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
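###Markdown
Using `set_arguments()`, the arguments can also be changed after mining; rules that become unreachable are pruned automatically. A short sketch on a fresh runner – the file name `bar.py` is just an illustrative assumption:
###Code
demo_runner = OptionRunner("autopep8", "foo.py")
demo_runner.set_arguments("bar.py")  # hypothetical file name
demo_runner.ebnf_grammar()["<arguments>"]
###Output
_____no_output_____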
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
"""Fuzz a (Python) program using its arguments"""
def __init__(self, runner: OptionRunner, *args, **kwargs):
"""Constructor. `runner` is an OptionRunner."""
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the runner given now (or set previously) with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying them in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPyWe can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--semantic-analysis-only --allow-any-generics --fast-exit --no-warn-unused-configs --shadow-file --allow-redefinition --pretty -h --disable-error-code --enable-error-code --hide-absolute-path --strict --warn-incomplete-stub foo.py
--disallow-untyped-defs --always-true --warn-return-any --show-error-codes --disallow-any-unimported foo.py
--disallow-any-expr --disallow-untyped-decorators --allow-untyped-calls -p --xml-report g --no-warn-incomplete-stub --disallow-untyped-globals foo.py
--no-namespace-packages --no-strict-concatenate --no-fast-exit --python-executable --disallow-incomplete-defs --install-types foo.py
--enable-incomplete-features vN --pdb --hide-error-context foo.py
--disallow-any-explicit --interactive --xslt-html-report M --no-check-untyped-defs --dump-graph -m foo.py
--tb --no-incremental --help --package-root --no-warn-unused-ignores --no-color-output --txt-report @ --always-false --config-file t --show-error-context --hide-error-codes --implicit-optional --disallow-redefinition --logical-deps --no-warn-return-any foo.py
--allow-subclassing-any --quickstart-file jX --no-silence-site-packages --no-warn-unreachable --disallow-any-generics foo.py
--ignore-missing-imports --local-partial-types --warn-no-return --no-warn-redundant-casts foo.py
###Markdown
Example: NotedownHere are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
import warnings
with warnings.catch_warnings():
# Workaround: `notedown` can issue a `DeprecationWarning`
warnings.filterwarnings("ignore", category=DeprecationWarning)
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--render :[
--from r --rmagic -h --nomagic --output qJ8 --to 3 --execute
--version --match }` --timeout 8 --strip --debug o
--precode K I --help --run ?p
--examples -h --render j
-o --knit l --template U --version --rmagic F
--examples -h --strip V
--render -h --version b
--version --execute ;Q
--rmagic --knit --rmagic S
###Markdown
Combinatorial TestingOur `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
Z
--help --version --strip --render
--help --debug -h --nomagic
-h --strip --run --nomagic
--execute --version --examples --version
--help --nomagic --rmagic --version r
--help --execute --help --rmagic
--strip --version -h --version
--strip --debug --examples --debug h
-h --run -h --execute
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$ For `autopep8` with its 30 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
# docassert
assert len(autopep8_runner.ebnf_grammar()["<option>"]) == 30
###Output
_____no_output_____
###Markdown
... we thus need $30 \times 29 = 870$ tests to cover all pairs in both orders (or 435 tests if the order of the two options does not matter):
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
For `mypy` with its 143 options, though, we already end up with 20,000+ tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
# docassert
assert len(mypy_runner.ebnf_grammar()["<option>"]) == 143
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
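###Markdown
If we suspect that only a few options actually interact, we can restrict pairwise testing to just those and still cover every other option individually. Here is a sketch on the `autopep8` options; the choice of `interacting` options is a hypothetical assumption for illustration:
###Code
# Pairwise-test only options suspected to interact;
# cover all remaining (presumably orthogonal) options individually
autopep8_options = autopep8_runner.grammar()["<option>"]
interacting = [' -i', ' --in-place', ' -d', ' --diff', ' -r', ' --recursive']
others = [opt for opt in autopep8_options if opt not in interacting]
focused_autopep8_grammar = extend_grammar(autopep8_runner.grammar())
focused_autopep8_grammar["<option>"] = pairwise(interacting) + others
assert is_valid_grammar(focused_autopep8_grammar)
len(focused_autopep8_grammar["<option>"])  # 15 pairs + 24 single options
###Output
_____no_output_____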
###Markdown
Even if each pair takes a second to run, we'd still be done in under six hours of testing, though. If your program has more options that you want to cover in combination, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other (as sketched above); and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](Exercises), below, have a number of options ready for you. SynopsisThis chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options. `OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
option_ebnf_grammar
###Output
_____no_output_____
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. The `OptionRunner` constructor accepts an additional `miner_class` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create your own option grammar miners.
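As a minimal sketch of such an extension, here is a miner subclass that logs every argument definition it encounters (class and variable names are our own, for illustration):
###Code
class LoggingOptionGrammarMiner(OptionGrammarMiner):
    def process_argument(self, locals, in_group):
        print("Mining", locals['args'])
        super().process_argument(locals, in_group)

logging_runner = OptionRunner("autopep8", "foo.py",
                              miner_class=LoggingOptionGrammarMiner)
###Output
_____no_output_____
###Markdown
The class diagram below summarizes how the classes in this chapter relate to each other: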
###Code
# ignore
from ClassDiagram import display_class_hierarchy
from Fuzzer import Fuzzer, Runner, ProgramRunner
from Grammars import Expansion
from GrammarFuzzer import GrammarFuzzer, DerivationTree
from GrammarCoverageFuzzer import TrackingGrammarCoverageFuzzer
# ignore
display_class_hierarchy([OptionRunner, OptionFuzzer, OptionGrammarMiner],
public_methods=[
Fuzzer.__init__,
Fuzzer.fuzz,
Fuzzer.run,
Fuzzer.runs,
GrammarFuzzer.__init__,
GrammarFuzzer.fuzz,
GrammarFuzzer.fuzz_tree,
TrackingGrammarCoverageFuzzer.__init__,
OptionFuzzer.__init__,
OptionFuzzer.run,
Runner.__init__,
Runner.run,
                            ProgramRunner.__init__,
OptionRunner.__init__,
OptionRunner.ebnf_grammar,
OptionRunner.grammar,
OptionGrammarMiner.__init__,
OptionGrammarMiner.mine_ebnf_grammar,
OptionGrammarMiner.mine_grammar,
],
types={
'DerivationTree': DerivationTree,
'Expansion': Expansion,
'Grammar': Grammar
},
project='fuzzingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned* Besides regular input data, program _configurations_ make an important testing target.* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. Next StepsIf you liked the idea of mining a grammar from a program, do not miss:* [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on:* [how to parse and recombine inputs](Parser.ipynb)* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)* [how to simplify inputs that cause a failure](Reducer.ipynb) BackgroundAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. Exercises Exercise 1: ifdef Configuration FuzzingIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code
```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```
the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<name>` or `-D<name>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:
```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```
A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments.
Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. Part 1: Extract Preprocessor VariablesWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that
```python
cpp_identifiers(open("xmlparse.c").readlines())
```
returns the set
```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```
**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
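# Note: digits are not part of this pattern, so an identifier like
# HAVE_ARC4RANDOM is split into HAVE_ARC and RANDOM
# (as visible in the fuzzed invocations further below)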
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option GrammarWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<id>` for a preprocessor variable `<id>`. Using this grammar `cpp_grammar`, a fuzzer
```python
g = GrammarCoverageFuzzer(cpp_grammar)
```
would create C compiler invocations such as
```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c',
 ...]
```
**Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration FuzzingUsing the grammar just produced, use a `GrammarCoverageFuzzer` to:
1. Test each preprocessor variable individually.
2. Test each pair of preprocessor variables, using `pairwise()`.
What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DXML_POOR_ENTROPY -DRANDOM_BUF xmlparse.c
$ cc -c -DXML_UNICODE_WCHAR_T -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DHAVE_GETRANDOM xmlparse.c
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_GETRANDOM -D__SCO__ -D__UNIXWARE__ xmlparse.c
$ cc -c -DXML_POOR_ENTROPY -DRANDOM xmlparse.c
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DRANDOM_BUF -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DHAVE_ARC -DTIOCSWINSZ -DHAVE_ARC -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration FuzzingBesides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):
```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```
The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read ConfigurationUsing `configparser`, create a program reading in the above configuration file and accessing the individual elements. Part 2: Create a Configuration GrammarDesign a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. Part 3: Mine a Configuration GrammarBy dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy! Exercise 3: Extracting and Fuzzing C Command-Line OptionsIn C programs, the `getopt()` function is frequently used to process configuration options. A call
```
getopt(argc, argv, "bf:")
```
indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon). Part 1: Getopt FuzzingWrite a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:
1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.
Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking! Part 2: Fuzzing Long Options in CSame as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking! Exercise 4: Expansions in ContextIn our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:
```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing ConfigurationsThe behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences. **Prerequisites*** You should have read the [chapter on grammars](Grammars.ipynb).* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb). SynopsisTo [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```
and then make use of the following features.This chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```
The grammar can be extracted via the method `ebnf_grammar()`:
```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> print(option_ebnf_grammar)
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
```
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py',
 ' --indent-size 54 --diff --global-config k --select &, --list-fixes -a --hang-closing --range 0 72 --ignore-local-config -p 8 --version -d --experimental foo.py',
 ' --ignore i --jobs -16 --verbose -v --line-range -3 9 -r --help --max-line-length 8 -h --aggressive --recursive --exclude qE" --in-place -j -979 -i --pep8-passes 4 --version --in-place --aggressive --version foo.py']
```
The `OptionFuzzer` class summarizes these steps.
Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.```python>>> autopep8_runner = OptionRunner("autopep8", "foo.py")>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)>>> [autopep8_fuzzer.fuzz() for i in range(3)][' foo.py', ' --range 46 -1 --recursive -d --select <6 --exclude :" --global-config UVE --help --aggressive --experimental -r --line-range -7 -9 --version -i -h --indent-size -05 --max-line-length 8 --in-place --verbose --jobs -32 --ignore-local-config -v -p -1 --hang-closing -j 38 -a --list-fixes --pep8-passes 67 --diff --ignore v --select I --ignore (1NJ --ignore Km --ignore ? --select ^kZ --global-config y --select ia]9 --exclude o --ignore R!4GP.x8/ --ignore D --exclude 7 --exclude Bd -a --recursive --verbose foo.py', " --ignore \\ --global-config l --global-config @ --ignore ,CM~& --ignore nb --select c --global-config zgW --ignore $`s{H --global-config - --exclude 2| --select O --exclude 0 --exclude * --ignore qA'F}X --global-config p>_r+ --global-config eQ --exclude [ --ignore t --select h) --select %f --exclude u3;=TL --global-config w --ignore j5 --exclude Y --ignore S --ignore ]J --global-config 1 --ignore-local-config --max-line-length 36693 -i foo.py"]```The final step in testing would now be to invoke the program with these arguments.Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. Configuration OptionsWhen we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, is _configuration options_ on the command line. As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything. Options in PythonLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import fuzzingbook_utils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for ConfigurationsHow can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol
PROCESS_NUMBERS_EBNF_GRAMMAR = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_ Mining Configuration OptionsIn this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar. Tracking ArgumentsLet us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}, 'args': ('-h', '--help'), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}, 'args': ('integers',), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}, 'args': ('--sum',), 'self': <argparse._MutuallyExclusiveGroup object at 0x11c6eb198>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}, 'args': ('--min',), 'self': <argparse._MutuallyExclusiveGroup object at 0x11c6eb198>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}, 'args': ('--max',), 'self': <argparse._MutuallyExclusiveGroup object at 0x11c6eb198>}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters. A Grammar Miner for Options and Arguments Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner(object):
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form
```
<start> ::= (<option>)*<arguments>
<option> ::=
<arguments> ::=
```
in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:* If the argument starts with `-`, it gets added as an optional element to the `<option>` list* Otherwise, it gets added to the `<arguments>` list.The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        # Options and arguments alike are appended to their respective list
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, ``) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in``` ::= * ::= ```and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing]
[files [files ...]]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W503)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
###Markdown
Autopep8 SetupWe want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
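###Markdown
(Python's standard `shutil.which()` function, available since Python 3.3, performs the same lookup along the executable search path:)
###Code
import shutil
shutil.which("autopep8")
###Output
_____no_output_____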
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
    # First line has to contain "/usr/bin/env python" or the like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
--hang-closing --experimental --aggressive foo.py
--ignore-local-config -d -h -p 9 --version --list-fixes foo.py
-a --verbose foo.py
-v --indent-size 7 --global-config { foo.py
--in-place --help --select ~s --max-line-length 1 foo.py
--pep8-passes 8 --diff foo.py
-i --recursive foo.py
-r --hang-closing foo.py
--jobs 0 -i foo.py
--exclude k --line-range 3 6 --verbose foo.py
-v -i foo.py
--version -a --list-fixes foo.py
--ignore x -r foo.py
-j 4 --in-place -a foo.py
--range 5 2 --list-fixes foo.py
--indent-size 5 --indent-size 3 foo.py
--indent-size 0 --indent-size 8 foo.py
--indent-size 7 --indent-size 3 foo.py
--indent-size 9 --verbose foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input:
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8 = ProgramRunner(args)
result, outcome = autopep8.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 -a --max-line-length 2 --jobs 5 --help -r foo.py
$ autopep8 --version --indent-size 0 --ignore-local-config -h foo.py
autopep8 1.3.4 (pycodestyle: 2.5.0)
$ autopep8 --ignore z --diff -j 7 --experimental --list-fixes --verbose -i --recursive foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing]
[files [files ...]]
autopep8: error: --in-place and --diff are mutually exclusive
$ autopep8 --line-range 1 6 --in-place --select _ foo.py
$ autopep8 --exclude n --pep8-passes 3 --aggressive foo.py
$ autopep8 --global-config &F -p 4 -d foo.py
$ autopep8 --hang-closing --range 8 9 -v foo.py
[file:foo.py]
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
$ autopep8 --indent-size 1 --version --hang-closing foo.py
autopep8 1.3.4 (pycodestyle: 2.5.0)
$ autopep8 --indent-size 3 --hang-closing --aggressive foo.py
$ autopep8 --indent-size 8 -r --in-place foo.py
$ autopep8 --indent-size 9 --indent-size 7 --version foo.py
autopep8 1.3.4 (pycodestyle: 2.5.0)
$ autopep8 -a --aggressive --help -v foo.py
$ autopep8 --indent-size 9 --indent-size 7 foo.py
$ autopep8 --indent-size 5 --indent-size 2 --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 1 issue(s) to fix {'E111': {3}}
$ autopep8 --indent-size 9 --in-place --recursive foo.py
$ autopep8 --indent-size 9 --indent-size 9 foo.py
$ autopep8 --indent-size 6 --indent-size 9 -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 2 issue(s) to fix {'E111': {3}, 'E117': {3}}
$ autopep8 --indent-size 4 --indent-size -5 --list-fixes foo.py
$ autopep8 --indent-size 93 -a foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying them in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
-j -8 foo.py
--aggressive --global-config U} --version --verbose foo.py
--help --experimental -p 01 --hang-closing -r -d --list-fixes foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPyWe can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
foo.py
-m --no-warn-unused-configs --cache-dir --dirty-stubs --xml-report b --strict --always-false --no-warn-no-return --disallow-any-generics foo.py
--linecoverage-report VF? --local-partial-types foo.py
--python-executable --dump-graph --any-exprs-report j --warn-unused-ignores --bazel -2 foo.py
--scripts-are-modules --warn-no-return --verbose -p --no-silence-site-packages --shadow-file --no-strict-optional --disallow-subclassing-any --strict-optional --almost-silent --package --help foo.py
--check-untyped-defs --warn-incomplete-stub --no-check-untyped-defs --allow-untyped-calls --ignore-missing-imports foo.py
--show-traceback --hide-column-numbers --disallow-any-decorated --disallow-untyped-decorators --xslt-html-report pm --warn-redundant-casts --fast-parser --package-root --html-report x --no-site-packages --hide-error-context --always-true foo.py
--disallow-incomplete-defs --strict-optional-whitelist K -V foo.py
--custom-typeshed-dir r^ --command -i --skip-version-check foo.py
--config-file C --allow-incomplete-defs --no-warn-redundant-casts --find-occurrences v8 --warn-unused-configs --disallow-untyped-defs foo.py
###Markdown
Example: NotedownHere are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--nomagic
-o --examples --match 6? --timeout 93 --help --run >
--precode Y --rmagic --version 2
--template '* --strip s8p
--output -h --debug ^
--execute --render --debug v
--knit --to q --from m --run -h --version +
-o --rmagic --nomagic J
--precode 4 f --version ]
-o E --version HB
###Markdown
Combinatorial TestingOur `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here are the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--run --debug --help --execute
-o --timeout 8
--precode : --render -h --run
--help --debug --strip --nomagic G
--render --debug --rmagic --nomagic r
--help --version --execute --strip ^
-h --execute --precode t --version ip
--nomagic --debug --version --debug -h --render K
--examples --version --help --examples
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n \times (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus need $29 \times 28 / 2 = 406$ tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
    (len(autopep8_runner.ebnf_grammar()["<option>"]) - 1) // 2
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with $110 \times 109 / 2 = 5995$ tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
    (len(mypy_runner.ebnf_grammar()["<option>"]) - 1) // 2
###Output
_____no_output_____
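###Markdown
A quick sanity check with `math.comb()` (available from Python 3.8 onwards) confirms these numbers; note how the count for 20 options matches the 190 pairs listed in the combination-length table above:
###Code
from math import comb

for n in [20, 29, 110]:
    print(n, "options:", comb(n, 2), "option pairs")
###Output
_____no_output_____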
###Markdown
Even if each pair takes a second to run, we'd still be done in less than two hours of testing, though. If your program has more options that you want to cover in combinations, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other; and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you. SynopsisThis chapter provides two classes:* `OptionRunner` automatically extracts command-line options from a Python program;* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options. `OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
print(option_ebnf_grammar)
###Output
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. Lessons Learned* Besides regular input data, program _configurations_ make an important testing target.* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. Next StepsIf you liked the idea of mining a grammar from a program, do not miss:* [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on:* [how to parse and recombine inputs](Parser.ipynb)* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)* [how to simplify inputs that cause a failure](Reducer.ipynb) BackgroundAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. Exercises Exercise 1: ifdef Configuration FuzzingIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code
```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```
the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:
```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```
A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. Part 1: Extract Preprocessor VariablesWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that```pythoncpp_identifiers(open("xmlparse.c").readlines()) ```returns the set```python{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}``` **Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
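# Note: the identifier pattern above does not include digits, so _WIN32 is
# extracted as _WIN, and HAVE_ARC4RANDOM_BUF as HAVE_ARC plus RANDOM_BUF
# (visible in the runs below); extend it to [a-zA-Z0-9_$]+ for full identifiers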
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option GrammarWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<id>` for a preprocessor variable `<id>`. Using this grammar `cpp_grammar`, a fuzzer ```pythong = GrammarCoverageFuzzer(cpp_grammar)```would create C compiler invocations such as```python[g.fuzz() for i in range(10)]['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c', 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c', 'cc -DXML_POOR_ENTROPY xmlparse.c', 'cc -DRANDOM xmlparse.c', 'cc -D_WIN xmlparse.c', 'cc -DHAVE_ARC xmlparse.c', ...]``` **Solution.** This is not very difficult:
###Code
from Grammars import new_symbol
cpp_grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
cpp_grammar
assert is_valid_grammar(cpp_grammar)
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration FuzzingUsing the grammar just produced, use a `GrammarCoverageFuzzer` to1. Test each preprocessor variable individually2. Test each pair of preprocessor variables, using `pairwise()`.What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__SCO__ -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DRANDOM -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -D_WIN -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -DXML_DEV_URANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM -D__UNIXWARE__ xmlparse.c
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)
for i in range(10):
    invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_GETRANDOM -DHAVE_SYSCALL_GETRANDOM -D__UNIXWARE__ xmlparse.c
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations are _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. (One possible grammar sketch appears after the cleanup code below.)

Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
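For Part 2, as referenced above, here is one possible grammar sketch. The section names, keys, and values are assumptions modeled on the `example.ini` file from this exercise; they are an illustration, not a definitive solution:

```python
from Grammars import convert_ebnf_grammar, is_valid_grammar
from GrammarCoverageFuzzer import GrammarCoverageFuzzer

# A hypothetical grammar for ConfigParser-style files,
# modeled on the example.ini file shown earlier
CONFIG_EBNF_GRAMMAR = {
    "<start>": ["[DEFAULT]\n<settings>[topsecret.server.com]\n<settings>"],
    "<settings>": ["<setting>*"],
    "<setting>": ["<key> = <value>\n"],
    "<key>": ["compression", "forwardx11", "port"],
    "<value>": ["yes", "no", "<digit>+"],
    "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
}
assert is_valid_grammar(CONFIG_EBNF_GRAMMAR)

config_fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(CONFIG_EBNF_GRAMMAR))
print(config_fuzzer.fuzz())
```

Feeding such fuzzed configuration files into the Part 1 reader then exercises the `ConfigParser` processing code.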
###Markdown
**Solution.** Left to the reader. Enjoy!

Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work; a minimal sketch of this method appears after Exercise 4 below.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking!

Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking!

Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line>   ::= <int>
<int>    ::= (-)?<digit>+
<digit>  ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
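Returning to Exercise 3, Part 1, as referenced there: here is a minimal sketch of method 1 (scanning the source code). The helper name `mine_getopt_grammar` and the simplified character set are made up for illustration; a real implementation would need more robust parsing:

```python
import re

def mine_getopt_grammar(c_source: str):
    """Extract an EBNF option grammar from the first getopt() call
    in `c_source` -- crude source scanning, as in method 1."""
    match = re.search(r'getopt\s*\([^,]*,[^,]*,\s*"([^"]*)"\s*\)', c_source)
    assert match is not None, "no getopt() call found"
    spec = match.group(1)

    options = []
    i = 0
    while i < len(spec):
        option = " -" + spec[i]
        if spec[i + 1:i + 2] == ":":  # a trailing colon: option takes an argument
            option += " <str>"
            i += 1
        options.append(option)
        i += 1

    return {
        "<start>": ["(<option>)*"],
        "<option>": options,
        "<str>": ["<char>+"],
        "<char>": ["a", "b", "c"],  # simplified character set
    }

mine_getopt_grammar('getopt(argc, argv, "bf:")')
# {'<start>': ['(<option>)*'], '<option>': [' -b', ' -f <str>'], ...}
```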
###Markdown
Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.

**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).
###Code
import bookutils
from typing import List, Union, Optional, Callable, Type
###Output
_____no_output_____
###Markdown
Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features.

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> option_ebnf_grammar
{'<start>': ['(<option>)*<arguments>'],
 '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'],
 '<arguments>': [' foo.py'],
 '<str>': ['<char>+'],
 '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'],
 '<filename>': ['<str>'],
 '<int>': ['(-)?<digit>+'],
 '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'],
 '<n>': ['<int>'],
 '<globs>': ['<str>'],
 '<errors>': ['<str>'],
 '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py',
 ' --max-line-length 6 --jobs -594 --ignore , --ignore-local-config -r --in-place --list-fixes --recursive -v --experimental -p 72 -h --aggressive --indent-size 3 --exit-code --hang-closing --pep8-passes -180 -d --global-config XQjT --diff --exclude *g -j 43 --help --select A --version --verbose -a --line-range -3963 0 --range 1 4 -i --in-place --version foo.py',
 ' --global-config 2 --select PuR --ignore b --ignore @ --ignore ;7d --ignore ) --ignore Fw1Z --ignore 0 --global-config ynf --select >G --select + --global-config ( --exclude v --exclude V --ignore ^ --select L --exclude 6 --exclude =$` --ignore % --global-config N --ignore [8maop --ignore 3! --select ~?c< --exclude C --select U --exclude h --global-config --global-config 5O --select x --select B] --ignore _ --global-config .K --global-config S --exclude r --global-config qW --exclude te4/ --exclude J} --ignore " --exclude |H --global-config -&k{s --global-config E --select :I --ignore 9 --global-config M --exclude YD --select \\ --exclude z --ignore i --select \'l --ignore M --ignore ;h --exit-code foo.py']
```

The `OptionFuzzer` class summarizes these steps.
Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' --diff foo.py',
 ' --exclude --global-config V --select He --global-config | --global-config n}aicm --ignore 7 --ignore b --global-config u --exclude WB` --exclude 2 --exclude JpZt --exclude l_ --select *%^ --exclude & --exclude )Lv --global-config [ --global-config " --exclude sOEXP --aggressive --exclude \' --help --diff --experimental foo.py',
 ' --ignore FCw; --global-config /1K?:6 --exclude U --exclude z --ignore rQ --select x --select Y --select { --global-config o --select 34 --exclude ]j --select ~ --exclude 9@ --ignore w --global-config CVL --diff foo.py']
```

The final step in testing would now be to invoke the program with these arguments.

Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

The `OptionRunner` constructor accepts an additional `miner_class` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create your own option grammar miners.

![](PICS/ConfigurationFuzzer-synopsis-1.svg)

Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.

One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.

As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.

Options in Python

Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`).

By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.

Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x11100dfd0>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x11100dfd0>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x11100dfd0>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
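Besides `args`, the `kwargs` dictionary of `add_argument()` holds the type and cardinality information we will need later. A small variant of the above tracer (added here for illustration) prints these as well:

```python
def trace_kwargs(frame, event, arg):
    if event != "call" or frame.f_code.co_name != "add_argument":
        return
    kwargs = frame.f_locals['kwargs']
    print(frame.f_locals['args'], kwargs.get('nargs'), kwargs.get('type'))

sys.settrace(trace_kwargs)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
```

For `integers`, this would report `nargs='+'` and `type=<class 'int'>`; for the operators, both are `None`.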
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
"""Helper class for extracting option grammars"""
def __init__(self, function: Callable, log: bool = False):
"""Constructor.
`function` - a function processing arguments using argparse()
`log` - output diagnostics if True
"""
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start> ::= <option>* <arguments>
<option> ::=
<arguments> ::=
```

in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
"""Extract EBNF option grammar"""
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
"""Extract BNF option grammar"""
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as an EBNF operator.

Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group> <option>* <arguments>
<group> ::=
```

and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
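As the cell output is not shown here, note that – judging from the miner's logic above – the mined grammar should look roughly as follows; this is a reconstruction for illustration, not captured output:

```python
{'<start>': ['<group>(<option>)*<arguments>'],
 '<option>': [' -h', ' --help'],
 '<arguments>': ['( <integers>)+'],
 '<int>': ['(-)?<digit>+'],
 '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'],
 '<integers>': ['<int>'],
 '<group>': [' --sum', ' --min', ' --max']}
```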
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
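As a quick sanity check (an addition to the text), we can feed a few fuzzed invocations back into `process_numbers()`. The `ExpectError` guard is needed because fuzzed `-h`/`--help` options make `argparse` exit:

```python
from ExpectError import ExpectError

f = GrammarCoverageFuzzer(grammar)
for i in range(3):
    args = f.fuzz().split()
    with ExpectError(print_traceback=False):  # -h/--help raise SystemExit
        process_numbers(args)
```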
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 Setup

We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 GrammarWe can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration Options

Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(ProgramRunner):
"""Run a program while determining its option grammar"""
def __init__(self, program: Union[str, List[str]],
arguments: Optional[str] = None, *,
miner_class: Optional[Type[OptionGrammarMiner]] = None):
"""Constructor.
`program` - the (Python) program to be executed
`arguments` - an (optional) string with arguments for `program`
`miner_class` - the `OptionGrammarMiner` class to be used
(default: `OptionGrammarMiner`)
"""
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
if miner_class is None:
miner_class = OptionGrammarMiner
self.miner_class = miner_class
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = self.miner_class(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
"""Return extracted grammar in EBNF form"""
return self._ebnf_grammar
def grammar(self):
"""Return extracted grammar in BNF form"""
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
"""Fuzz a (Python) program using its arguments"""
def __init__(self, runner: OptionRunner, *args, **kwargs):
"""Constructor. `runner` is an OptionRunner."""
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
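This separation means one could, hypothetically, mine the grammar from one program and direct the fuzzed invocations at another; both names below are placeholders for illustration:

```python
fuzzer = OptionFuzzer(OptionRunner("autopep8", "foo.py"))  # grammar mined from autopep8
# fuzzer.run(some_other_runner)   # `some_other_runner`: any other OptionRunner
```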
###Markdown
Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
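The call returns the same `(result, outcome)` pair as `ProgramRunner.run()`, so we can inspect the run for diagnostics – for instance:

```python
result, outcome = autopep8_fuzzer.run(autopep8_runner)
if result.stderr != "":
    print(result.stderr, end="")
```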
###Markdown
Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --no-explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--explicit-package-bases --exclude --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --inferstats --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --command --module --disallow-redefinition --cache-dir --no-strict-optional -c -h -p foo.py
--implicit-optional --disallow-any-generics --no-warn-unreachable --strict-equality --show-error-codes --package --always-false foo.py
###Markdown
Example: Notedown

Here are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
c
--help -o --nomagic --version --output --match H9- --rmagic --render --run
--examples -h --precode r x --execute --strip --debug <
--execute --rmagic --rmagic !
-h --render --debug 2
--examples --version --version .
--execute --help --render R
--run -h 6T
--execute --rmagic -h ;
--from 5* --version --help s
###Markdown
Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options.

The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
--help --timeout -0 w
--rmagic --render --execute --strip
--execute --rmagic --nomagic --debug
--help --strip --strip --version r
-h --debug --help --execute
--execute --debug -h --render
--strip --rmagic --render --examples h
-h --run --examples --debug
--from PX --examples >
--run --strip --strip --render
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 406 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 812 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
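As a cross-check (added here), the same number follows directly from the binomial coefficient, since $2 \times {29 \choose 2} = 812$:

```python
from math import comb
2 * comb(29, 2)  # = 2 * 406 = 812
```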
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in three hours of testing, though. If your program has more options than you can cover in combinations, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other, and covering all other (presumably orthogonal) options individually; a sketch of this idea appears after the combination counts above. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](Exercises), below, have a number of options ready for you.

Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
option_ebnf_grammar
###Output
_____no_output_____
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs. The `OptionRunner` constructor accepts an additional `miner` keyword parameter, which takes the class of the argument grammar miner to be used. By default, this is `OptionGrammarMiner` – a helper class that can be used (and extended) to create one's own option grammar miners.
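For instance, passing the default miner explicitly would look as follows (a sketch following the description above; note that the simplified `OptionRunner` defined later in this chapter does not take this parameter):

```python
# Explicitly selecting the (default) grammar miner; a subclass of
# OptionGrammarMiner could be passed instead to customize mining.
autopep8_runner = OptionRunner("autopep8", "foo.py",
                               miner=OptionGrammarMiner)
```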
###Code
# ignore
from ClassDiagram import display_class_hierarchy
from Fuzzer import Fuzzer, Runner, ProgramRunner
from Grammars import Expansion
from GrammarFuzzer import GrammarFuzzer, DerivationTree
from GrammarCoverageFuzzer import TrackingGrammarCoverageFuzzer
# ignore
display_class_hierarchy([OptionRunner, OptionFuzzer, OptionGrammarMiner],
public_methods=[
Fuzzer.__init__,
Fuzzer.fuzz,
Fuzzer.run,
Fuzzer.runs,
GrammarFuzzer.__init__,
GrammarFuzzer.fuzz,
GrammarFuzzer.fuzz_tree,
TrackingGrammarCoverageFuzzer.__init__,
OptionFuzzer.__init__,
OptionFuzzer.run,
Runner.__init__,
Runner.run,
ProgramRunner.__init__,
ProgramRunner.__init__,
OptionRunner.__init__,
OptionRunner.ebnf_grammar,
OptionRunner.grammar,
OptionGrammarMiner.__init__,
OptionGrammarMiner.mine_ebnf_grammar,
OptionGrammarMiner.mine_grammar,
],
types={
'DerivationTree': DerivationTree,
'Expansion': Expansion,
'Grammar': Grammar
},
project='fuzzingbook')
###Output
_____no_output_____
###Markdown
Lessons Learned

* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.

Next Steps

If you liked the idea of mining a grammar from a program, do not miss:

* [how to mine grammars for input data](GrammarMiner.ipynb)

Our next steps in the book focus on:

* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)

Background

Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}. More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.

Exercises

Exercise 1: ifdef Configuration Fuzzing

In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments.
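Note how quickly such configurations add up: with $n$ independent preprocessor variables, there are $2^n$ possible configurations. A quick sketch:

```python
# Each preprocessor variable can be defined or undefined independently,
# so n variables already yield 2 ** n distinct configurations.
for n in [1, 5, 10, 20]:
    print(n, 2 ** n)
```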
Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.

Part 1: Extract Preprocessor Variables

Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM',
 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
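# re_cpp_if_directive matches conditional directives (#if, #ifdef, #ifndef, #elif);
# re_cpp_identifier then extracts identifier-like tokens from such lines.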
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option Grammar

With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<id>` for a preprocessor variable `<id>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c', ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration Fuzzing

Using the grammar just produced, use a `GrammarCoverageFuzzer` to

1. test each preprocessor variable individually;
2. test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations?

**Solution.** We can simply run the coverage fuzzer, as described above.
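As a reminder, `pairwise()` (introduced earlier in this chapter) can be sketched as follows; the definition given earlier may differ in details:

```python
def pairwise(option_list):
    # All ordered pairs of distinct options, concatenated
    return [option_1 + option_2
            for option_1 in option_list
            for option_2 in option_list
            if option_1 != option_2]
```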
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -DTIOCSWINSZ -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_POOR_ENTROPY -D__UNIXWARE__ -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -D__SCO__ -D_WIN -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM -DHAVE_SYSCALL_GETRANDOM -D_WIN xmlparse.c
$ cc -c -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_DEV_URANDOM xmlparse.c
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
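# Note: the loop below reuses the fuzzer `g` from above. To actually cover
# pairs, one could instantiate a fresh fuzzer on the pairwise grammar:
# g = GrammarCoverageFuzzer(pairwise_cpp_grammar)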
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__UNIXWARE__ -DHAVE_GETRANDOM -DRANDOM_BUF xmlparse.c
$ cc -c -DHAVE_SYSCALL_GETRANDOM -D__UNIXWARE__ xmlparse.c
$ cc -c -DRANDOM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ -DHAVE_GETRANDOM -D__SCO__ xmlparse.c
$ cc -c -DRANDOM -DHAVE_ARC -DHAVE_GETRANDOM -DRANDOM xmlparse.c
$ cc -c -D_WIN -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM_BUF -D__SCO__ -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DRANDOM -DHAVE_SYSCALL_GETRANDOM -DRANDOM_BUF xmlparse.c
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations are _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.

Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy!

Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results.

**Solution.** Left to the reader. Enjoy hacking!

Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure.

**Solution.** Left to the reader. Enjoy hacking!

Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line>   ::= <int>
<int>    ::= (-)?<digit>+
<digit>  ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.

**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).

Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from fuzzingbook.ConfigurationFuzzer import <identifier>
```

and then make use of the following features.

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
```

The grammar can be extracted via the method `ebnf_grammar()`:

```python
>>> option_ebnf_grammar = autopep8_runner.ebnf_grammar()
>>> print(option_ebnf_grammar)
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
```

The grammar can be immediately used for fuzzing.
A `GrammarCoverageFuzzer` will ensure all options are covered:

```python
>>> from Grammars import convert_ebnf_grammar
>>> fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
>>> [fuzzer.fuzz() for i in range(3)]
[' foo.py',
 ' --max-line-length 6 --jobs -594 --ignore , --ignore-local-config -r --in-place --list-fixes --recursive -v --experimental -p 72 -h --aggressive --indent-size 3 --exit-code --hang-closing --pep8-passes -180 -d --global-config XQjT --diff --exclude *g -j 43 --help --select A --version --verbose -a --line-range -3963 0 --range 1 4 -i --in-place --version foo.py',
 ' --global-config 2 --select PuR --ignore b --ignore @ --ignore ;7d --ignore ) --ignore Fw1Z --ignore 0 --global-config ynf --select >G --select + --global-config ( --exclude v --exclude V --ignore ^ --select L --exclude 6 --exclude =$` --ignore % --global-config N --ignore [8maop --ignore 3! --select ~?c< --exclude C --select U --exclude h --global-config --global-config 5O --select x --select B] --ignore _ --global-config .K --global-config S --exclude r --global-config qW --exclude te4/ --exclude J} --ignore " --exclude |H --global-config -&k{s --global-config E --select :I --ignore 9 --global-config M --exclude YD --select \\ --exclude z --ignore i --select \'l --ignore M --ignore ;h --exit-code foo.py']
```

The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.

```python
>>> autopep8_runner = OptionRunner("autopep8", "foo.py")
>>> autopep8_fuzzer = OptionFuzzer(autopep8_runner)
>>> [autopep8_fuzzer.fuzz() for i in range(3)]
[' --diff foo.py',
 ' --exclude --global-config V --select He --global-config | --global-config n}aicm --ignore 7 --ignore b --global-config u --exclude WB` --exclude 2 --exclude JpZt --exclude l_ --select *%^ --exclude & --exclude )Lv --global-config [ --global-config " --exclude sOEXP --aggressive --exclude \' --help --diff --experimental foo.py',
 ' --ignore FCw; --global-config /1K?:6 --exclude U --exclude z --ignore rQ --select x --select Y --select { --global-config o --select 34 --exclude ]j --select ~ --exclude 9@ --ignore w --global-config CVL --diff foo.py']
```

The final step in testing would now be to invoke the program with these arguments.

Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module; and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line. As an example, consider the `grep` utility to find textual patterns in files.
The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.

Options in Python

Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integer` arguments added earlier. Special actions (`actions`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import bookutils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, extend_grammar, is_valid_grammar
from Grammars import START_SYMBOL, new_symbol, Grammar
PROCESS_NUMBERS_EBNF_GRAMMAR: Grammar = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module. Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def trace_locals(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(trace_locals)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('-h', '--help'), 'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}}
add_argument {'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True), 'args': ('integers',), 'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e7f9130>, 'args': ('--sum',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e7f9130>, 'args': ('--min',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}}
add_argument {'self': <argparse._MutuallyExclusiveGroup object at 0x10e7f9130>, 'args': ('--max',), 'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def trace_options(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(trace_options)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner:
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start>     ::= <option>* <arguments>
<option>    ::=
<arguments> ::=
```

in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
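        # Start from a skeleton grammar; the <option> and <arguments>
        # alternatives are filled in while tracing the program under test.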
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.gettrace()
sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator. Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
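To illustrate with a sketch (assuming a parameter expansion of `" <int>"` for an option `--opt`), the resulting expansions would be:

```
nargs = 1    ->   --opt <int>
nargs = 2    ->   --opt <int> <int>
nargs = "+"  ->   --opt( <int>)+
nargs = "*"  ->   --opt( <int>)*
```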
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        # Options and arguments alike get appended to their target list
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group><option>* <arguments>
<group> ::=
```

and filled with the next calls to `add_argument()` within the group.
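The four flag combinations map to EBNF operators as follows (mirroring the code below):

```
required, exclusive          ->  <group>   (exactly one option from the group)
required, not exclusive      ->  <group>+  (one or more)
not required, exclusive      ->  <group>?  (at most one)
not required, not exclusive  ->  <group>*  (any number)
```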
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`:
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar. Testing Autopep8 Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing] [--exit-code]
[files ...]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W50,W690)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
--exit-code change to behavior of exit code. default behavior of
return value, 0 is no differences, 1 is error exit.
return 2 when add this option. 2 is exists
differences.
###Markdown
Autopep8 Setup

We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 Grammar

We can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but set up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
Metavariables like `<line>` or `<n>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options

Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
-h --experimental --hang-closing foo.py
--list-fixes -v foo.py
--aggressive -d foo.py
--indent-size 9 --help foo.py
--exit-code --recursive foo.py
--diff --version -i foo.py
--max-line-length 0 --in-place --verbose foo.py
--ignore-local-config -a foo.py
--select x -i --exit-code foo.py
-j 8 --diff foo.py
-d -v -d foo.py
-p 6 -i foo.py
-v --diff foo.py
--ignore uA --recursive foo.py
--jobs 5 -r foo.py
--range 4 1 foo.py
--ignore-local-config -i foo.py
-r --exit-code foo.py
-v -r foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input: (Note that the following commands will overwrite the file `foo.py`, if it already exists in the current working directory. Be aware of this, if you downloaded the notebooks and are working locally.)
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8_runner = ProgramRunner(args)
result, outcome = autopep8_runner.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 --diff --max-line-length 4 --exit-code --range 5 8 -p 2 foo.py
$ autopep8 --ignore z --verbose -r --list-fixes foo.py
--recursive must be used with --in-place or --diff$ autopep8 --exclude 5 -h -i --aggressive --in-place foo.py
$ autopep8 --select a --help --experimental foo.py
$ autopep8 --indent-size -30 --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --global-config < -j 9 -v -a foo.py
parallel jobs requires --in-place$ autopep8 --line-range 7 1 --hang-closing -d foo.py
First value of --range should be less than or equal to the second$ autopep8 --pep8-passes 6 --hang-closing --version --ignore-local-config foo.py
$ autopep8 --jobs -2 --experimental --version foo.py
$ autopep8 --ignore Y: --select ! --global-config e foo.py
$ autopep8 --select 1 -a --recursive --aggressive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --ignore * --ignore `0 --global-config _ --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config ,\ --exclude r -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
---> 3 issue(s) to fix {'E251': {2}, 'E221': {3}, 'E222': {3}}
---> 1 issue(s) to fix {'E222': {3}}
---> 0 issue(s) to fix {}
$ autopep8 --global-config xd6M --recursive foo.py
--recursive must be used with --in-place or --diff$ autopep8 --select R --exclude L --version --ignore-local-config foo.py
$ autopep8 --select " --verbose -h -d foo.py
$ autopep8 --diff -i -h foo.py
$ autopep8 --in-place --select w --version -i foo.py
$ autopep8 --ignore 49 --exclude lI -i foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration Options

Let us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
from Grammars import unreachable_nonterminals
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
# Delete rules for previous arguments
for nonterminal in unreachable_nonterminals(self._ebnf_grammar):
del self._ebnf_grammar[nonterminal]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.
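For instance, one could mine the options from one invocation context and apply them in another (a sketch; `bar.py` is a hypothetical second input file):

```python
# Mine the grammar from one runner ...
autopep8_fuzzer = OptionFuzzer(OptionRunner("autopep8", "foo.py"))

# ... and run the fuzzed invocations on another one
other_runner = OptionRunner("autopep8", "bar.py")
result, outcome = autopep8_fuzzer.run(other_runner)
```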
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8

Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
foo.py
--in-place --ignore-local-config --jobs 6 --recursive -i foo.py
--help -a --indent-size -95 --pep8-passes 3 --exclude = -r foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPy

We can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
--follow-imports l foo.py
--logical-deps --allow-any-generics --any-exprs-report yB --fast-exit --show-error-context -i --no-site-packages --no-explicit-package-bases --no-sqlite-cache --warn-unused-ignores foo.py
--html-report 7/ --tb --error-summary foo.py
--no-implicit-reexport --no-error-summary --warn-return-any --dump-build-stats --check-untyped-defs --hide-error-context --no-install-types --install-types foo.py
--no-warn-redundant-casts --no-warn-return-any --bazel --sqlite-cache --enable-error-code foo.py
--cache-map { t --version --warn-unused-configs foo.py
--explicit-package-bases --exclude --help --semantic-analysis-only --soft-error-limit 6 --skip-version-check --no-warn-unused-configs foo.py
--disallow-untyped-defs --hide-error-codes --show-absolute-path --shadow-file --package-root --interactive foo.py
--strict-optional --disable-error-code --disallow-untyped-globals --disallow-incomplete-defs --no-namespace-packages --show-column-numbers --inferstats --raise-exceptions --no-color-output --allow-incomplete-defs --warn-incomplete-stub --command --module --disallow-redefinition --cache-dir --no-strict-optional -c -h -p foo.py
--implicit-optional --disallow-any-generics --no-warn-unreachable --strict-equality --show-error-codes --package --always-false foo.py
###Markdown
Example: Notedown

Here are the configuration options for the `notedown` Notebook to Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
c
--help -o --nomagic --version --output --match H9- --rmagic --render --run
--examples -h --precode r x --execute --strip --debug <
--execute --rmagic --rmagic !
-h --render --debug 2
--examples --version --version .
--execute --help --render R
--run -h 6T
--execute --rmagic -h ;
--from 5* --version --help s
###Markdown
Combinatorial Testing

Our `GrammarCoverageFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we can also see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us the means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = extend_grammar(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_pairwise_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_pairwise_fuzzer.fuzz())
###Output
--help --timeout -0 w
--rmagic --render --execute --strip
--execute --rmagic --nomagic --debug
--help --strip --strip --version r
-h --debug --help --execute
--execute --debug -h --render
--strip --rmagic --render --examples h
-h --run --examples --debug
--from PX --examples >
--run --strip --strip --render
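###Markdown
The same scheme extends beyond pairs. A 3-way ("triple-wise") variant is a one-line change – a sketch only; `triplewise` is a name we introduce here, and the number of triples grows quickly, as we will see next:
###Code
def triplewise(option_list):
    # 3-way analogue of pairwise(): concatenate each triple of options
    return [option_1 + option_2 + option_3
            for (option_1, option_2, option_3) in combinations(option_list, 3)]
###Output
_____no_output_____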
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2(n - 2)!} = \frac{n (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus have 406 distinct pairs. However, the binomial coefficient does not differentiate between permutations of elements of the pairs, which our tests do. Therefore we need 812 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
(len(autopep8_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
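###Markdown
We can verify these numbers with Python's `math.comb` (available since Python 3.8) – a quick sanity check of the formulas above:
###Code
from math import comb
n = 29  # number of autopep8 options
comb(n, 2), n * (n - 1)  # 406 unordered pairs, 812 ordered tests
###Output
_____no_output_____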
###Markdown
For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
(len(mypy_runner.ebnf_grammar()["<option>"]) - 1)
###Output
_____no_output_____
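###Markdown
How long would that be? A quick back-of-the-envelope check, at one test per second:
###Code
11990 / 3600  # hours of testing, at one test per second
###Output
_____no_output_____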
###Markdown
Even if each pair takes a second to run, we'd still be done in a bit over three hours of testing, though. If your program has more options that you want covered in combinations, it is advisable to limit the number of configurations further – for instance, by limiting combinatorial testing to those combinations that can possibly interact with each other, and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can easily be extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](Exercises), below, have a number of options ready for you.

Synopsis

This chapter provides two classes:

* `OptionRunner` automatically extracts command-line options from a Python program;
* `OptionFuzzer` uses these to automatically test a Python program with a large variety of options.

`OptionRunner` runs a program up to the point where it parses its arguments, and then extracts a grammar that describes its invocations:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
###Output
_____no_output_____
###Markdown
The grammar can be extracted via the method `ebnf_grammar()`:
###Code
option_ebnf_grammar = autopep8_runner.ebnf_grammar()
print(option_ebnf_grammar)
###Output
{'<start>': ['(<option>)*<arguments>'], '<option>': [' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing', ' --exit-code'], '<arguments>': [' foo.py'], '<str>': ['<char>+'], '<char>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~'], '<filename>': ['<str>'], '<int>': ['(-)?<digit>+'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<n>': ['<int>'], '<globs>': ['<str>'], '<errors>': ['<str>'], '<line>': ['<int>']}
###Markdown
The grammar can be immediately used for fuzzing. A `GrammarCoverageFuzzer` will ensure all options are covered:
###Code
from Grammars import convert_ebnf_grammar
fuzzer = GrammarCoverageFuzzer(convert_ebnf_grammar(option_ebnf_grammar))
[fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The `OptionFuzzer` class summarizes these steps. Its constructor takes an `OptionRunner` to automatically extract the grammar; it does the necessary steps to extract the grammar and fuzz with it.
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
autopep8_fuzzer = OptionFuzzer(autopep8_runner)
[autopep8_fuzzer.fuzz() for i in range(3)]
###Output
_____no_output_____
###Markdown
The final step in testing would now be to invoke the program with these arguments. Note that `OptionRunner` is experimental: It assumes that the Python program in question uses the `argparse` module, and not all `argparse` features are supported. Still, it does a pretty good job even on nontrivial programs.

Lessons Learned

* Besides regular input data, program _configurations_ make an important testing target.
* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.
* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.

Next Steps

If you liked the idea of mining a grammar from a program, do not miss:

* [how to mine grammars for input data](GrammarMiner.ipynb)

Our next steps in the book focus on:

* [how to parse and recombine inputs](Parser.ipynb)
* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)
* [how to simplify inputs that cause a failure](Reducer.ipynb)

Background

Although configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}. More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files.

Exercises

Exercise 1: #ifdef Configuration Fuzzing

In C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code

```C
#ifdef LONG_FOO
long foo() { ... }
#else
int foo() { ... }
#endif
```

the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:

```c
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif

#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
    && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
    && !defined(XML_DEV_URANDOM) \
    && !defined(_WIN32) \
    && !defined(XML_POOR_ENTROPY)
# error
#endif

#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif

#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif

int fun(int x) { return XML_T(x); }
```

A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.

Part 1: Extract Preprocessor Variables

Write a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that

```python
cpp_identifiers(open("xmlparse.c").readlines())
```

returns the set

```python
{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}
```

**Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= {"if", "ifdef", "ifndef", "defined"}
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option Grammar

With the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer

```python
g = GrammarCoverageFuzzer(cpp_grammar)
```

would create C compiler invocations such as

```python
[g.fuzz() for i in range(10)]
['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',
 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',
 'cc -DXML_POOR_ENTROPY xmlparse.c',
 'cc -DRANDOM xmlparse.c',
 'cc -D_WIN xmlparse.c',
 'cc -DHAVE_ARC xmlparse.c', ...]
```

**Solution.** This is not very difficult:
###Code
from Grammars import Grammar, is_valid_grammar
cpp_grammar: Grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
assert is_valid_grammar(cpp_grammar)
cpp_grammar
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration Fuzzing

Using the grammar just produced, use a `GrammarCoverageFuzzer` to

1. Test each preprocessor variable individually
2. Test each pair of preprocessor variables, using `pairwise()`.

What happens if you actually run the invocations?

**Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -DHAVE_ARC -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_UNICODE_WCHAR_T -DRANDOM_BUF -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 2 errors generated.
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D__SCO__ -DTIOCSWINSZ -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DRANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM -DTIOCSWINSZ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = extend_grammar(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)  # fuzz with the pairwise grammar, not the original one
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -DRANDOM_BUF -DRANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_ARC -DRANDOM -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM -DXML_DEV_URANDOM -DRANDOM -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DTIOCSWINSZ -D__UNIXWARE__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DXML_POOR_ENTROPY -D__SCO__ -DTIOCSWINSZ xmlparse.c
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_GETRANDOM -DLOAD_LIBRARY_SEARCH_SYSTEM -DXML_POOR_ENTROPY xmlparse.c
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration Fuzzing

Besides command-line options, another important source of configurations is _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):

```
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
```

The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
###Markdown
Part 1: Read Configuration

Using `configparser`, create a program reading in the above configuration file and accessing the individual elements.

Part 2: Create a Configuration Grammar

Design a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. (A minimal grammar sketch is given at the end of this exercise.)

Part 3: Mine a Configuration Grammar

By dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
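###Markdown
As a starting point for Part 2, a minimal configuration grammar could look as follows – a sketch only: `CONFIG_GRAMMAR` is a name we introduce here, and the section and option names are taken from the example above; adapt them to your own program. Fuzzing with `GrammarCoverageFuzzer(CONFIG_GRAMMAR)` then yields candidate configuration files.
###Code
from Grammars import crange, is_valid_grammar

CONFIG_GRAMMAR = {
    "<start>": ["[topsecret.server.com]\n<settings>"],
    "<settings>": ["<setting>", "<settings><setting>"],
    "<setting>": ["port = <int>\n", "forwardx11 = <yesno>\n"],
    "<yesno>": ["yes", "no"],
    "<int>": ["<digit>", "<int><digit>"],
    "<digit>": crange('0', '9')
}
assert is_valid_grammar(CONFIG_GRAMMAR)
###Output
_____no_output_____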
###Markdown
**Solution.** Left to the reader. Enjoy!

Exercise 3: Extracting and Fuzzing C Command-Line Options

In C programs, the `getopt()` function is frequently used to process configuration options. A call

```c
getopt(argc, argv, "bf:")
```

indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).

Part 1: Getopt Fuzzing

Write a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:

1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)
2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.
3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.

Apply this on `grep` and `ls`; report the resulting grammars and results.

**Solution.** Left to the reader. Enjoy hacking!

Part 2: Fuzzing Long Options in C

Same as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure.

**Solution.** Left to the reader. Enjoy hacking!

Exercise 4: Expansions in Context

In our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:

```
<option> ::= ... | --line-range <line> <line> | ...
<line> ::= <int>
<int> ::= (-)?<digit>+
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____
###Markdown
Testing Configurations

The behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.

**Prerequisites**

* You should have read the [chapter on grammars](Grammars.ipynb).
* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).

Configuration Options

When we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation. One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line. As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:
###Code
!grep --help
###Output
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
###Markdown
All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.

Options in Python

Let us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`). By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`action`) allow storing specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:
###Code
import argparse
def process_numbers(args=[]):
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--sum', dest='accumulate', action='store_const',
const=sum,
help='sum the integers')
group.add_argument('--min', dest='accumulate', action='store_const',
const=min,
help='compute the minimum')
group.add_argument('--max', dest='accumulate', action='store_const',
const=max,
help='compute the maximum')
args = parser.parse_args(args)
print(args.accumulate(args.integers))
###Output
_____no_output_____
###Markdown
Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:
###Code
process_numbers(["--min", "100", "200", "300"])
###Output
100
###Markdown
Or compute the sum of three numbers:
###Code
process_numbers(["--sum", "1", "2", "3"])
###Output
6
###Markdown
When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:
###Code
import fuzzingbook_utils
from ExpectError import ExpectError
with ExpectError(print_traceback=False):
process_numbers(["--sum", "--max", "1", "2", "3"])
###Output
usage: ipykernel_launcher.py [-h] (--sum | --min | --max) N [N ...]
ipykernel_launcher.py: error: argument --max: not allowed with argument --sum
SystemExit: 2 (expected)
###Markdown
A Grammar for Configurations

How can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:
###Code
from Grammars import crange, srange, convert_ebnf_grammar, is_valid_grammar, START_SYMBOL, new_symbol
PROCESS_NUMBERS_EBNF_GRAMMAR = {
"<start>": ["<operator> <integers>"],
"<operator>": ["--sum", "--min", "--max"],
"<integers>": ["<integer>", "<integers> <integer>"],
"<integer>": ["<digit>+"],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)
###Output
_____no_output_____
###Markdown
We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:
###Code
from GrammarCoverageFuzzer import GrammarCoverageFuzzer
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
print(f.fuzz())
###Output
--max 9 5 8 210 80 9756431
--sum 9 4 99 1245 612370
--min 2 3 0 46 15798 7570926
###Markdown
Of course, we can also invoke `process_numbers()` with these very arguments. To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:
###Code
f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)
for i in range(3):
args = f.fuzz().split()
print(args)
process_numbers(args)
###Output
['--max', '8', '9', '3067', '44', '13852967057']
13852967057
['--sum', '9', '8', '63', '9278111', '59206197798']
59215475989
['--min', '4', '1', '4864', '33342', '7827970808951']
1
###Markdown
In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_

Mining Configuration Options

In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.

Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.

Tracking Arguments

Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).
###Code
import sys
import string
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(method_name, locals)
###Output
_____no_output_____
###Markdown
What we get is a list of all calls to `add_argument()`, together with the method arguments passed:
###Code
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
add_argument {'kwargs': {'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}, 'args': ('-h', '--help'), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}, 'args': ('integers',), 'self': ArgumentParser(prog='ipykernel_launcher.py', usage=None, description='Process some integers.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}, 'args': ('--sum',), 'self': <argparse._MutuallyExclusiveGroup object at 0x109d06518>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}, 'args': ('--min',), 'self': <argparse._MutuallyExclusiveGroup object at 0x109d06518>}
add_argument {'kwargs': {'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}, 'args': ('--max',), 'self': <argparse._MutuallyExclusiveGroup object at 0x109d06518>}
6
###Markdown
From the `args` argument, we can access the individual options and arguments to be defined:
###Code
def traceit(frame, event, arg):
if event != "call":
return
method_name = frame.f_code.co_name
if method_name != "add_argument":
return
locals = frame.f_locals
print(locals['args'])
sys.settrace(traceit)
process_numbers(["--sum", "1", "2", "3"])
sys.settrace(None)
###Output
('-h', '--help')
('integers',)
('--sum',)
('--min',)
('--max',)
6
###Markdown
We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. Our job will be to go through the arguments of `add_argument()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.

A Grammar Miner for Options and Arguments

Let us now build a class that gathers all this information to create a grammar. We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:
###Code
class ParseInterrupt(Exception):
pass
###Output
_____no_output_____
###Markdown
The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:
###Code
class OptionGrammarMiner(object):
def __init__(self, function, log=False):
self.function = function
self.log = log
###Output
_____no_output_____
###Markdown
The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form

```
<start> ::= <option>* <arguments>
<option> ::=
<arguments> ::=
```

in which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
OPTION_SYMBOL = "<option>"
ARGUMENTS_SYMBOL = "<arguments>"
def mine_ebnf_grammar(self):
self.grammar = {
START_SYMBOL: ["(" + self.OPTION_SYMBOL + ")*" + self.ARGUMENTS_SYMBOL],
self.OPTION_SYMBOL: [],
self.ARGUMENTS_SYMBOL: []
}
self.current_group = self.OPTION_SYMBOL
old_trace = sys.settrace(self.traceit)
try:
self.function()
except ParseInterrupt:
pass
sys.settrace(old_trace)
return self.grammar
def mine_grammar(self):
return convert_ebnf_grammar(self.mine_ebnf_grammar())
###Output
_____no_output_____
###Markdown
The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate. Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def traceit(self, frame, event, arg):
if event != "call":
return
if "self" not in frame.f_locals:
return
self_var = frame.f_locals["self"]
method_name = frame.f_code.co_name
if method_name == "add_argument":
in_group = repr(type(self_var)).find("Group") >= 0
self.process_argument(frame.f_locals, in_group)
elif method_name == "add_mutually_exclusive_group":
self.add_group(frame.f_locals, exclusive=True)
elif method_name == "add_argument_group":
# self.add_group(frame.f_locals, exclusive=False)
pass
elif method_name == "parse_args":
raise ParseInterrupt
return None
###Output
_____no_output_____
###Markdown
The method `process_argument()` now analyzes the arguments passed and adds them to the grammar:

* If the argument starts with `-`, it gets added as an optional element to the `<option>` list.
* Otherwise, it gets added to the `<arguments>` list.

The optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as an EBNF operator. Given the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def process_argument(self, locals, in_group):
args = locals["args"]
kwargs = locals["kwargs"]
if self.log:
print(args)
print(kwargs)
print()
for arg in args:
self.process_arg(arg, in_group, kwargs)
class OptionGrammarMiner(OptionGrammarMiner):
def process_arg(self, arg, in_group, kwargs):
if arg.startswith('-'):
if not in_group:
target = self.OPTION_SYMBOL
else:
target = self.current_group
metavar = None
arg = " " + arg
else:
target = self.ARGUMENTS_SYMBOL
metavar = arg
arg = ""
if "nargs" in kwargs:
nargs = kwargs["nargs"]
else:
nargs = 1
param = self.add_parameter(kwargs, metavar)
if param == "":
nargs = 0
if isinstance(nargs, int):
for i in range(nargs):
arg += param
else:
assert nargs in "?+*"
arg += '(' + param + ')' + nargs
        # Appending works the same for options and arguments
        self.grammar[target].append(arg)
###Output
_____no_output_____
###Markdown
The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.
###Code
import inspect
class OptionGrammarMiner(OptionGrammarMiner):
def add_parameter(self, kwargs, metavar):
if "action" in kwargs:
# No parameter
return ""
type_ = "str"
if "type" in kwargs:
given_type = kwargs["type"]
# int types come as '<class int>'
if inspect.isclass(given_type) and issubclass(given_type, int):
type_ = "int"
if metavar is None:
if "metavar" in kwargs:
metavar = kwargs["metavar"]
else:
metavar = type_
self.add_type_rule(type_)
if metavar != type_:
self.add_metavar_rule(metavar, type_)
param = " <" + metavar + ">"
return param
###Output
_____no_output_____
###Markdown
The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_type_rule(self, type_):
if type_ == "int":
self.add_int_rule()
else:
self.add_str_rule()
def add_int_rule(self):
self.grammar["<int>"] = ["(-)?<digit>+"]
self.grammar["<digit>"] = crange('0', '9')
def add_str_rule(self):
self.grammar["<str>"] = ["<char>+"]
self.grammar["<char>"] = srange(
string.digits
+ string.ascii_letters
+ string.punctuation)
def add_metavar_rule(self, metavar, type_):
self.grammar["<" + metavar + ">"] = ["<" + type_ + ">"]
###Output
_____no_output_____
###Markdown
The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in

```
<start> ::= <group> <option>* <arguments>
<group> ::=
```

and filled with the next calls to `add_argument()` within the group.
###Code
class OptionGrammarMiner(OptionGrammarMiner):
def add_group(self, locals, exclusive):
kwargs = locals["kwargs"]
if self.log:
print(kwargs)
required = kwargs.get("required", False)
group = new_symbol(self.grammar, "<group>")
if required and exclusive:
group_expansion = group
if required and not exclusive:
group_expansion = group + "+"
if not required and exclusive:
group_expansion = group + "?"
if not required and not exclusive:
group_expansion = group + "*"
self.grammar[START_SYMBOL][0] = group_expansion + \
self.grammar[START_SYMBOL][0]
self.grammar[group] = []
self.current_group = group
###Output
_____no_output_____
###Markdown
That's it! With this, we can now extract the grammar from our `process_numbers()` program. Turning on logging again reveals the variables we draw upon.
###Code
miner = OptionGrammarMiner(process_numbers, log=True)
process_numbers_grammar = miner.mine_ebnf_grammar()
###Output
('-h', '--help')
{'action': 'help', 'default': '==SUPPRESS==', 'help': 'show this help message and exit'}
('integers',)
{'metavar': 'N', 'type': <class 'int'>, 'nargs': '+', 'help': 'an integer for the accumulator'}
{'required': True}
('--sum',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function sum>, 'help': 'sum the integers'}
('--min',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function min>, 'help': 'compute the minimum'}
('--max',)
{'dest': 'accumulate', 'action': 'store_const', 'const': <built-in function max>, 'help': 'compute the maximum'}
###Markdown
Here is the extracted grammar:
###Code
process_numbers_grammar
###Output
_____no_output_____
###Markdown
The grammar properly identifies the group found:
###Code
process_numbers_grammar["<start>"]
process_numbers_grammar["<group>"]
###Output
_____no_output_____
###Markdown
It also identifies a `--help` option provided not by us, but by the `argparse` module:
###Code
process_numbers_grammar["<option>"]
###Output
_____no_output_____
###Markdown
The grammar also correctly identifies the types of the arguments:
###Code
process_numbers_grammar["<arguments>"]
process_numbers_grammar["<integers>"]
###Output
_____no_output_____
###Markdown
The rules for `int` are set as defined by `add_int_rule()`
###Code
process_numbers_grammar["<int>"]
###Output
_____no_output_____
###Markdown
We can take this grammar and convert it to BNF, such that we can fuzz with it right away:
###Code
assert is_valid_grammar(process_numbers_grammar)
grammar = convert_ebnf_grammar(process_numbers_grammar)
assert is_valid_grammar(grammar)
f = GrammarCoverageFuzzer(grammar)
for i in range(10):
print(f.fuzz())
###Output
--sum 9
--max -h --help --help -16 -0
--min --help 2745341 8
--min 1 27
--sum --help --help -2
--sum --help 0 3 -77
--sum -3
--sum --help 429 8 10 0295 -694 1
--max -h 91 -1425 99
--sum -795 -94 8 -44
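###Markdown
As with the hand-written grammar, we can feed these invocations back into `process_numbers()`. Since fuzzed invocations may contain options like `--help` that exit the program, we guard each call with the `ExpectError` helper imported above – a sketch:
###Code
f = GrammarCoverageFuzzer(grammar)
for i in range(3):
    args = f.fuzz().split()
    print(args)
    with ExpectError(print_traceback=False):
        process_numbers(args)
###Output
_____no_output_____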
###Markdown
Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar.

Testing Autopep8

Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:
###Code
!autopep8 --help
###Output
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing]
[files [files ...]]
Automatically formats Python code to conform to the PEP 8 style guide.
positional arguments:
files files to format or '-' for standard in
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose print verbose messages; multiple -v result in more
verbose messages
-d, --diff print the diff for the fixed source
-i, --in-place make changes to files in place
--global-config filename
path to a global pep8 config file; if this file does
not exist then this is ignored (default:
/Users/zeller/.config/pep8)
--ignore-local-config
don't look for and apply local config files; if not
passed, defaults are updated with any config files in
the project's root directory
-r, --recursive run recursively over directories; must be used with
--in-place or --diff
-j n, --jobs n number of parallel jobs; match CPU count if value is
less than 1
-p n, --pep8-passes n
maximum number of additional pep8 passes (default:
infinite)
-a, --aggressive enable non-whitespace changes; multiple -a result in
more aggressive changes
--experimental enable experimental fixes
--exclude globs exclude file/directory names that match these comma-
separated globs
--list-fixes list codes for fixes; used by --ignore and --select
--ignore errors do not fix these errors/warnings (default:
E226,E24,W503)
--select errors fix only these errors/warnings (e.g. E4,W)
--max-line-length n set maximum allowed line length (default: 79)
--line-range line line, --range line line
only fix errors found within this inclusive range of
line numbers (e.g. 1 99); line numbers are indexed at
1
--hang-closing hang-closing option passed to pycodestyle
###Markdown
Autopep8 Setup

We want to systematically test these options. In order to deploy our configuration grammar miner, we need to find the source code of the executable:
###Code
import os
def find_executable(name):
for path in os.get_exec_path():
qualified_name = os.path.join(path, name)
if os.path.exists(qualified_name):
return qualified_name
return None
autopep8_executable = find_executable("autopep8")
assert autopep8_executable is not None
autopep8_executable
###Output
_____no_output_____
###Markdown
Next, we build a function that reads the contents of the file and executes it.
###Code
def autopep8():
executable = find_executable("autopep8")
# First line has to contain "/usr/bin/env python" or like
first_line = open(executable).readline()
assert first_line.find("python") >= 0
contents = open(executable).read()
exec(contents)
###Output
_____no_output_____
###Markdown
Mining an Autopep8 Grammar

We can use the `autopep8()` function in our grammar miner:
###Code
autopep8_miner = OptionGrammarMiner(autopep8)
###Output
_____no_output_____
###Markdown
and extract a grammar for it:
###Code
autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()
###Output
_____no_output_____
###Markdown
This works because here, `autopep8` is not a separate process (and a separate Python interpreter); instead, we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but set up the argument parser – which is what we are interested in. The grammar options mined reflect precisely the options seen when providing `--help`:
###Code
print(autopep8_ebnf_grammar["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing']
###Markdown
Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:
###Code
autopep8_ebnf_grammar["<line>"]
###Output
_____no_output_____
###Markdown
The grammar miner has inferred that the argument to `autopep8` is a list of files:
###Code
autopep8_ebnf_grammar["<arguments>"]
###Output
_____no_output_____
###Markdown
which in turn all are strings:
###Code
autopep8_ebnf_grammar["<files>"]
###Output
_____no_output_____
###Markdown
As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)
###Code
autopep8_ebnf_grammar["<arguments>"] = [" <files>"]
autopep8_ebnf_grammar["<files>"] = ["foo.py"]
assert is_valid_grammar(autopep8_ebnf_grammar)
###Output
_____no_output_____
###Markdown
Creating Autopep8 Options

Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:
###Code
autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)
assert is_valid_grammar(autopep8_grammar)
###Output
_____no_output_____
###Markdown
And we can use the grammar for fuzzing all options:
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)
for i in range(20):
print(f.fuzz())
###Output
-r foo.py
--hang-closing --experimental --aggressive foo.py
--ignore-local-config -d -h -p 9 --version --list-fixes foo.py
-a --verbose foo.py
-v --indent-size 7 --global-config { foo.py
--in-place --help --select ~s --max-line-length 1 foo.py
--pep8-passes 8 --diff foo.py
-i --recursive foo.py
-r --hang-closing foo.py
--jobs 0 -i foo.py
--exclude k --line-range 3 6 --verbose foo.py
-v -i foo.py
--version -a --list-fixes foo.py
--ignore x -r foo.py
-j 4 --in-place -a foo.py
--range 5 2 --list-fixes foo.py
--indent-size 5 --indent-size 3 foo.py
--indent-size 0 --indent-size 8 foo.py
--indent-size 7 --indent-size 3 foo.py
--indent-size 9 --verbose foo.py
###Markdown
Let us apply these options on the actual program. We need a file `foo.py` that will serve as input:
###Code
def create_foo_py():
open("foo.py", "w").write("""
def twice(x = 2):
return x + x
""")
create_foo_py()
print(open("foo.py").read(), end="")
###Output
def twice(x = 2):
return x + x
###Markdown
We see how `autopep8` fixes the spacing:
###Code
!autopep8 foo.py
###Output
def twice(x=2):
return x + x
###Markdown
Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.
###Code
from Fuzzer import ProgramRunner
###Output
_____no_output_____
###Markdown
Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)
###Code
f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)
for i in range(20):
invocation = "autopep8" + f.fuzz()
print("$ " + invocation)
args = invocation.split()
autopep8 = ProgramRunner(args)
result, outcome = autopep8.run()
if result.stderr != "":
print(result.stderr, end="")
###Output
$ autopep8 foo.py
$ autopep8 -a --max-line-length 2 --jobs 5 --help -r foo.py
$ autopep8 --version --indent-size 0 --ignore-local-config -h foo.py
$ autopep8 --ignore z --diff -j 7 --experimental --list-fixes --verbose -i --recursive foo.py
usage: autopep8 [-h] [--version] [-v] [-d] [-i] [--global-config filename]
[--ignore-local-config] [-r] [-j n] [-p n] [-a]
[--experimental] [--exclude globs] [--list-fixes]
[--ignore errors] [--select errors] [--max-line-length n]
[--line-range line line] [--hang-closing]
[files [files ...]]
autopep8: error: --in-place and --diff are mutually exclusive
$ autopep8 --line-range 1 6 --in-place --select _ foo.py
$ autopep8 --exclude n --pep8-passes 3 --aggressive foo.py
$ autopep8 --global-config &F -p 4 -d foo.py
$ autopep8 --hang-closing --range 8 9 -v foo.py
[file:foo.py]
---> 5 issue(s) to fix {'E251': {2}, 'E271': {3}, 'E221': {3}, 'E222': {3}}
$ autopep8 --indent-size 1 --version --hang-closing foo.py
$ autopep8 --indent-size 3 --hang-closing --aggressive foo.py
$ autopep8 --indent-size 8 -r --in-place foo.py
$ autopep8 --indent-size 9 --indent-size 7 --version foo.py
$ autopep8 -a --aggressive --help -v foo.py
$ autopep8 --indent-size 9 --indent-size 7 foo.py
$ autopep8 --indent-size 5 --indent-size 2 --verbose foo.py
[file:foo.py]
---> Applying global fix for E265
---> 1 issue(s) to fix {'E111': {3}}
$ autopep8 --indent-size 9 --in-place --recursive foo.py
$ autopep8 --indent-size 9 --indent-size 9 foo.py
$ autopep8 --indent-size 6 --indent-size 9 -v foo.py
[file:foo.py]
---> Applying global fix for E265
---> 1 issue(s) to fix {'E111': {3}}
$ autopep8 --indent-size 4 --indent-size -5 --list-fixes foo.py
$ autopep8 --indent-size 93 -a foo.py
###Markdown
Our `foo.py` file now has been formatted in place a number of times:
###Code
print(open("foo.py").read(), end="")
###Output
def twice(x=2):
return x + x
###Markdown
We don't need it anymore, so we clean up things:
###Code
import os
os.remove("foo.py")
###Output
_____no_output_____
###Markdown
Classes for Fuzzing Configuration OptionsLet us now create reusable classes that we can use for testing arbitrary programs. (Okay, make that "arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.") The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.
###Code
class OptionRunner(ProgramRunner):
def __init__(self, program, arguments=None):
if isinstance(program, str):
self.base_executable = program
else:
self.base_executable = program[0]
self.find_contents()
self.find_grammar()
if arguments is not None:
self.set_arguments(arguments)
super().__init__(program)
###Output
_____no_output_____
###Markdown
First, we find the contents of the Python executable:
###Code
class OptionRunner(OptionRunner):
def find_contents(self):
self._executable = find_executable(self.base_executable)
first_line = open(self._executable).readline()
assert first_line.find("python") >= 0
self.contents = open(self._executable).read()
def invoker(self):
exec(self.contents)
def executable(self):
return self._executable
###Output
_____no_output_____
###Markdown
Next, we determine the grammar using the `OptionGrammarMiner` class:
###Code
class OptionRunner(OptionRunner):
def find_grammar(self):
miner = OptionGrammarMiner(self.invoker)
self._ebnf_grammar = miner.mine_ebnf_grammar()
def ebnf_grammar(self):
return self._ebnf_grammar
def grammar(self):
return convert_ebnf_grammar(self._ebnf_grammar)
###Output
_____no_output_____
###Markdown
The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.
###Code
class OptionRunner(OptionRunner):
def set_arguments(self, args):
self._ebnf_grammar["<arguments>"] = [" " + args]
def set_invocation(self, program):
self.program = program
###Output
_____no_output_____
###Markdown
We can instantiate the class on `autopep8` and immediately get the grammar:
###Code
autopep8_runner = OptionRunner("autopep8", "foo.py")
print(autopep8_runner.ebnf_grammar()["<option>"])
###Output
[' -h', ' --help', ' --version', ' -v', ' --verbose', ' -d', ' --diff', ' -i', ' --in-place', ' --global-config <filename>', ' --ignore-local-config', ' -r', ' --recursive', ' -j <n>', ' --jobs <n>', ' -p <n>', ' --pep8-passes <n>', ' -a', ' --aggressive', ' --experimental', ' --exclude <globs>', ' --list-fixes', ' --ignore <errors>', ' --select <errors>', ' --max-line-length <n>', ' --line-range <line> <line>', ' --range <line> <line>', ' --indent-size <int>', ' --hang-closing']
###Markdown
An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.
###Code
class OptionFuzzer(GrammarCoverageFuzzer):
def __init__(self, runner, *args, **kwargs):
assert issubclass(type(runner), OptionRunner)
self.runner = runner
grammar = runner.grammar()
super().__init__(grammar, *args, **kwargs)
###Output
_____no_output_____
###Markdown
When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.
###Code
class OptionFuzzer(OptionFuzzer):
def run(self, runner=None, inp=""):
if runner is None:
runner = self.runner
assert issubclass(type(runner), OptionRunner)
invocation = runner.executable() + " " + self.fuzz()
runner.set_invocation(invocation.split())
return runner.run(inp)
###Output
_____no_output_____
###Markdown
Example: Autopep8Let us apply our newly defined classes on the `autopep8` runner:
###Code
autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)
for i in range(3):
print(autopep8_fuzzer.fuzz())
###Output
-j -8 foo.py
--aggressive --global-config U} --version --verbose foo.py
--help --experimental -p 01 --hang-closing -r -d --list-fixes foo.py
###Markdown
We can now systematically test `autopep8` with these classes:
###Code
autopep8_fuzzer.run(autopep8_runner)
###Output
_____no_output_____
###Markdown
Example: MyPyWe can extract options for the `mypy` static type checker for Python:
###Code
assert find_executable("mypy") is not None
mypy_runner = OptionRunner("mypy", "foo.py")
print(mypy_runner.ebnf_grammar()["<option>"])
mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)
for i in range(10):
print(mypy_fuzzer.fuzz())
###Output
foo.py
-m --no-warn-unused-configs --cache-dir --dirty-stubs --xml-report b --strict --always-false --no-warn-no-return --disallow-any-generics foo.py
--linecoverage-report VF? --local-partial-types foo.py
--python-executable --dump-graph --any-exprs-report j --warn-unused-ignores --bazel -2 foo.py
--scripts-are-modules --warn-no-return --verbose -p --no-silence-site-packages --shadow-file --no-strict-optional --disallow-subclassing-any --strict-optional --almost-silent --package --help foo.py
--check-untyped-defs --warn-incomplete-stub --no-check-untyped-defs --allow-untyped-calls --ignore-missing-imports foo.py
--show-traceback --hide-column-numbers --disallow-any-decorated --disallow-untyped-decorators --xslt-html-report pm --warn-redundant-casts --fast-parser --package-root --html-report x --no-site-packages --hide-error-context --always-true foo.py
--disallow-incomplete-defs --strict-optional-whitelist K -V foo.py
--custom-typeshed-dir r^ --command -i --skip-version-check foo.py
--config-file C --allow-incomplete-defs --no-warn-redundant-casts --find-occurrences v8 --warn-unused-configs --disallow-untyped-defs foo.py
###Markdown
Example: NotedownHere are the configuration options for the `notedown` Notebook-to-Markdown converter:
###Code
assert find_executable("notedown") is not None
notedown_runner = OptionRunner("notedown")
print(notedown_runner.ebnf_grammar()["<option>"])
notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--nomagic
-o --examples --match 6? --timeout 93 --help --run >
--precode Y --rmagic --version 2
--template '* --strip s8p
--output -h --debug ^
--execute --render --debug v
--knit --to q --from m --run -h --version +
-o --rmagic --nomagic J
--precode 4 f --version ]
-o E --version HB
###Markdown
Combinatorial TestingOur `CoverageGrammarFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options. The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.
###Code
from itertools import combinations
option_list = notedown_runner.ebnf_grammar()["<option>"]
pairs = list(combinations(option_list, 2))
###Output
_____no_output_____
###Markdown
There's quite a number of pairs:
###Code
len(pairs)
print(pairs[:20])
###Output
[(' -h', ' --help'), (' -h', ' -o( <str>)?'), (' -h', ' --output( <str>)?'), (' -h', ' --from <str>'), (' -h', ' --to <str>'), (' -h', ' --run'), (' -h', ' --execute'), (' -h', ' --timeout <int>'), (' -h', ' --strip'), (' -h', ' --precode( <str>)+'), (' -h', ' --knit( <str>)?'), (' -h', ' --rmagic'), (' -h', ' --nomagic'), (' -h', ' --render'), (' -h', ' --template <str>'), (' -h', ' --match <str>'), (' -h', ' --examples'), (' -h', ' --version'), (' -h', ' --debug'), (' --help', ' -o( <str>)?')]
###Markdown
Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs. We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.
###Code
def pairwise(option_list):
return [option_1 + option_2
for (option_1, option_2) in combinations(option_list, 2)]
###Output
_____no_output_____
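###Markdown
The same construction generalizes beyond pairs. As a minimal sketch, a k-wise variant that reduces to `pairwise()` for k = 2:
###Code
def k_wise(option_list, k=2):
    return ["".join(combo) for combo in combinations(option_list, k)]

assert k_wise(option_list, 2) == pairwise(option_list)
###Output
_____no_output_____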
###Markdown
Here's the first 20 pairs:
###Code
print(pairwise(option_list)[:20])
###Output
[' -h --help', ' -h -o( <str>)?', ' -h --output( <str>)?', ' -h --from <str>', ' -h --to <str>', ' -h --run', ' -h --execute', ' -h --timeout <int>', ' -h --strip', ' -h --precode( <str>)+', ' -h --knit( <str>)?', ' -h --rmagic', ' -h --nomagic', ' -h --render', ' -h --template <str>', ' -h --match <str>', ' -h --examples', ' -h --version', ' -h --debug', ' --help -o( <str>)?']
###Markdown
The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.
###Code
from copy import deepcopy
notedown_grammar = notedown_runner.grammar()
pairwise_notedown_grammar = deepcopy(notedown_grammar)
pairwise_notedown_grammar["<option>"] = pairwise(notedown_grammar["<option>"])
assert is_valid_grammar(pairwise_notedown_grammar)
###Output
_____no_output_____
###Markdown
Using the "pairwise" grammar to fuzz now covers one pair after another:
###Code
notedown_fuzzer = GrammarCoverageFuzzer(
pairwise_notedown_grammar, max_nonterminals=4)
for i in range(10):
print(notedown_fuzzer.fuzz())
###Output
--run --debug --help --execute
-o --timeout 8
--precode : --render -h --run
--help --debug --strip --nomagic G
--render --debug --rmagic --nomagic r
--help --version --execute --strip ^
-h --execute --precode t --version ip
--nomagic --debug --version --debug -h --render K
--examples --version --help --examples
###Markdown
Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:
###Code
for combination_length in range(1, 20):
tuples = list(combinations(option_list, combination_length))
print(combination_length, len(tuples))
###Output
1 20
2 190
3 1140
4 4845
5 15504
6 38760
7 77520
8 125970
9 167960
10 184756
11 167960
12 125970
13 77520
14 38760
15 15504
16 4845
17 1140
18 190
19 20
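###Markdown
As a quick sanity check, we can cross-check these numbers against the closed form for binomial coefficients (reusing `option_list` and `combinations` from above):
###Code
from math import factorial

def n_choose_k(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

for combination_length in range(1, 20):
    assert n_choose_k(len(option_list), combination_length) == \
        len(list(combinations(option_list, combination_length)))
###Output
_____no_output_____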
###Markdown
Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient$${n \choose k} = \frac{n!}{k!(n - k)!}$$ which for $k = 2$ (all pairs) gives us$${n \choose 2} = \frac{n!}{2!(n - 2)!} = \frac{n \times (n - 1)}{2}$$ For `autopep8` with its 29 options...
###Code
len(autopep8_runner.ebnf_grammar()["<option>"])
###Output
_____no_output_____
###Markdown
... we thus need 406 tests to cover all pairs:
###Code
len(autopep8_runner.ebnf_grammar()["<option>"]) * \
    (len(autopep8_runner.ebnf_grammar()["<option>"]) - 1) // 2
###Output
_____no_output_____
###Markdown
For `mypy` with its 110 options, though, we already end up with 5,995 tests to be conducted:
###Code
len(mypy_runner.ebnf_grammar()["<option>"])
len(mypy_runner.ebnf_grammar()["<option>"]) * \
    (len(mypy_runner.ebnf_grammar()["<option>"]) - 1) // 2
###Output
_____no_output_____
###Markdown
Even if each pair takes a second to run, we'd still be done in well under two hours of testing, though. If your program has more options that you want to get covered in combinations, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other; and covering all other (presumably orthogonal) options individually. This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you. Lessons Learned* Besides regular input data, program _configurations_ make an important testing target.* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets. Next StepsIf you liked the idea of mining a grammar from a program, do not miss:* [how to mine grammars for input data](GrammarMiner.ipynb) Our next steps in the book focus on:* [how to parse and recombine inputs](Parser.ipynb)* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)* [how to simplify inputs that cause a failure](Reducer.ipynb) BackgroundAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike "regular" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \cite{Petke2015}.More specifically, \cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. Exercises Exercise 1: #ifdef Configuration FuzzingIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code```c#ifdef LONG_FOOlong foo() { ... }#elseint foo() { ... }#endif```the compiler will compile the function `foo()` with return type `long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<name>` or `-D<name>=<value>`, as in `-DLONG_FOO`). Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation.
As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:```c#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800#endif#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \ && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \ && !defined(XML_DEV_URANDOM) \ && !defined(_WIN32) \ && !defined(XML_POOR_ENTROPY)# error#endif#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */#endif#ifdef XML_UNICODE_WCHAR_T#define XML_T(x) (const wchar_t)x#define XML_L(x) L ## x#else#define XML_T(x) (const unsigned short)x#define XML_L(x) x#endifint fun(int x) { return XML_T(x); }``` A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments. Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps. Part 1: Extract Preprocessor VariablesWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `cpp_identifiers()` on the sample C input above, such that```pythoncpp_identifiers(open("xmlparse.c").readlines()) ```returns the set```python{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}``` **Solution.** Let us start with creating a sample input file, `xmlparse.c`:
###Code
filename = "xmlparse.c"
open(filename, "w").write(
"""
#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \
&& !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \
&& !defined(XML_DEV_URANDOM) \
&& !defined(_WIN32) \
&& !defined(XML_POOR_ENTROPY)
# error
#endif
#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)
#define USE_SYSV_ENVVARS /* COLUMNS/LINES vs. TERMCAP */
#endif
#ifdef XML_UNICODE_WCHAR_T
#define XML_T(x) (const wchar_t)x
#define XML_L(x) L ## x
#else
#define XML_T(x) (const unsigned short)x
#define XML_L(x) x
#endif
int fun(int x) { return XML_T(x); }
""");
###Output
_____no_output_____
###Markdown
To find C preprocessor `if` directives and preprocessor variables, we use regular expressions matching them.
###Code
import re
re_cpp_if_directive = re.compile(r"\s*#\s*(el)?if")
re_cpp_identifier = re.compile(r"[a-zA-Z_$]+")
def cpp_identifiers(lines):
identifiers = set()
for line in lines:
if re_cpp_if_directive.match(line):
identifiers |= set(re_cpp_identifier.findall(line))
# These are preprocessor keywords
identifiers -= { "if", "ifdef", "ifndef", "defined" }
return identifiers
cpp_ids = cpp_identifiers(open("xmlparse.c").readlines())
cpp_ids
###Output
_____no_output_____
###Markdown
Part 2: Derive an Option GrammarWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<var>` for a preprocessor variable `<var>`. Using this grammar `cpp_grammar`, a fuzzer ```pythong = GrammarCoverageFuzzer(cpp_grammar)```would create C compiler invocations such as```python[g.fuzz() for i in range(10)]['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c', 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c', 'cc -DXML_POOR_ENTROPY xmlparse.c', 'cc -DRANDOM xmlparse.c', 'cc -D_WIN xmlparse.c', 'cc -DHAVE_ARC xmlparse.c', ...]``` **Solution.** This is not very difficult:
###Code
from Grammars import new_symbol
cpp_grammar = {
"<start>": ["cc -c<options> " + filename],
"<options>": ["<option>", "<options><option>"],
"<option>": []
}
for id in cpp_ids:
s = new_symbol(cpp_grammar, "<" + id + ">")
cpp_grammar["<option>"].append(s)
cpp_grammar[s] = [" -D" + id]
cpp_grammar
assert is_valid_grammar(cpp_grammar)
###Output
_____no_output_____
###Markdown
Part 3: C Preprocessor Configuration FuzzingUsing the grammar just produced, use a `GrammarCoverageFuzzer` to1. Test each preprocessor variable individually2. Test each pair of preprocessor variables, using `pairwise()`.What happens if you actually run the invocations? **Solution.** We can simply run the coverage fuzzer, as described above.
###Code
g = GrammarCoverageFuzzer(cpp_grammar)
g.fuzz()
from Fuzzer import ProgramRunner
for i in range(10):
invocation = g.fuzz()
print("$", invocation)
# subprocess.call(invocation, shell=True)
cc_runner = ProgramRunner(invocation.split(' '))
(result, outcome) = cc_runner.run()
print(result.stderr, end="")
###Output
$ cc -c -D__UNIXWARE__ -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DTIOCSWINSZ -DLOAD_LIBRARY_SEARCH_SYSTEM xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_ARC xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -D_WIN -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DRANDOM xmlparse.c
$ cc -c -DXML_POOR_ENTROPY -D__UNIXWARE__ -DHAVE_GETRANDOM -DXML_UNICODE_WCHAR_T -DXML_UNICODE_WCHAR_T xmlparse.c
xmlparse.c:22:25: error: expected ')'
int fun(int x) { return XML_T(x); }
^
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: note: to match this '('
xmlparse.c:15:18: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
^
xmlparse.c:22:25: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
int fun(int x) { return XML_T(x); }
^~~~~~~~
xmlparse.c:15:25: note: expanded from macro 'XML_T'
#define XML_T(x) (const wchar_t)x
~~~~~ ^
1 warning and 1 error generated.
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DHAVE_GETRANDOM -D_WIN -D__SCO__ xmlparse.c
$ cc -c -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
To test all pairs, we can use `pairwise()`:
###Code
pairwise_cpp_grammar = deepcopy(cpp_grammar)
pairwise_cpp_grammar["<option>"] = pairwise(cpp_grammar["<option>"])
pairwise_cpp_grammar["<option>"][:10]
g = GrammarCoverageFuzzer(pairwise_cpp_grammar)  # use the pairwise grammar for this run
for i in range(10):
    invocation = g.fuzz()
    print("$", invocation)
    # subprocess.call(invocation, shell=True)
    cc_runner = ProgramRunner(invocation.split(' '))
    (result, outcome) = cc_runner.run()
    print(result.stderr, end="")
###Output
$ cc -c -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DLOAD_LIBRARY_SEARCH_SYSTEM -DXML_DEV_URANDOM -D_WIN -DXML_POOR_ENTROPY -DHAVE_SYSCALL_GETRANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D_WIN xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM xmlparse.c
$ cc -c -D__SCO__ -DHAVE_SYSCALL_GETRANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -DHAVE_GETRANDOM xmlparse.c
$ cc -c -DXML_DEV_URANDOM xmlparse.c
$ cc -c -DRANDOM_BUF xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
$ cc -c -DHAVE_SYSCALL_GETRANDOM -DXML_POOR_ENTROPY xmlparse.c
$ cc -c -D_WIN -D__UNIXWARE__ -D__SCO__ xmlparse.c
xmlparse.c:7:3: error:
# error
^
1 error generated.
###Markdown
Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when the type is actually not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above. At the end, don't forget to clean up:
###Code
os.remove("xmlparse.c")
if os.path.exists("xmlparse.o"):
os.remove("xmlparse.o")
###Output
_____no_output_____
###Markdown
Exercise 2: .ini Configuration FuzzingBesides command-line options, another important source of configurations are _configuration files_. In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files. The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):```[DEFAULT]ServerAliveInterval = 45Compression = yesCompressionLevel = 9ForwardX11 = yes[bitbucket.org]User = hg[topsecret.server.com]Port = 50022ForwardX11 = no``` The above `ConfigParser` file can be created programmatically:
###Code
import configparser
config = configparser.ConfigParser()
config['DEFAULT'] = {'ServerAliveInterval': '45',
'Compression': 'yes',
'CompressionLevel': '9'}
config['bitbucket.org'] = {}
config['bitbucket.org']['User'] = 'hg'
config['topsecret.server.com'] = {}
topsecret = config['topsecret.server.com']
topsecret['Port'] = '50022' # mutates the parser
topsecret['ForwardX11'] = 'no' # same here
config['DEFAULT']['ForwardX11'] = 'yes'
with open('example.ini', 'w') as configfile:
config.write(configfile)
with open('example.ini') as configfile:
print(configfile.read(), end="")
###Output
[DEFAULT]
serveraliveinterval = 45
compression = yes
compressionlevel = 9
forwardx11 = yes
[bitbucket.org]
user = hg
[topsecret.server.com]
port = 50022
forwardx11 = no
###Markdown
and be read in again:
###Code
config = configparser.ConfigParser()
config.read('example.ini')
topsecret = config['topsecret.server.com']
topsecret['Port']
###Output
_____no_output_____
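###Markdown
Looking ahead to Part 2 below, here is a minimal sketch of a grammar for such configuration files; the section, key, and value sets are assumptions modeled on the example above:
###Code
ini_grammar = {
    "<start>": ["[DEFAULT]\n<options>\n[bitbucket.org]\n<options>"],
    "<options>": ["<option>", "<option><options>"],
    "<option>": ["<key> = <value>\n"],
    "<key>": ["serveraliveinterval", "compression", "compressionlevel",
              "forwardx11", "user", "port"],
    "<value>": ["yes", "no", "<digits>"],
    "<digits>": ["<digit>", "<digit><digits>"],
    "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
}
assert is_valid_grammar(ini_grammar)
print(GrammarCoverageFuzzer(ini_grammar, max_nonterminals=6).fuzz())
###Output
_____no_output_____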
###Markdown
Part 1: Read ConfigurationUsing `configparser`, create a program reading in the above configuration file and accessing the individual elements. Part 2: Create a Configuration GrammarDesign a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it. Part 3: Mine a Configuration GrammarBy dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:
###Code
class TrackingConfigParser(configparser.ConfigParser):
def __getitem__(self, key):
print("Accessing", repr(key))
return super().__getitem__(key)
###Output
_____no_output_____
###Markdown
For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:
###Code
tracking_config_parser = TrackingConfigParser()
tracking_config_parser.read('example.ini')
section = tracking_config_parser['topsecret.server.com']
###Output
Accessing 'topsecret.server.com'
###Markdown
Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing. At the end, don't forget to clean up:
###Code
import os
os.remove("example.ini")
###Output
_____no_output_____
###Markdown
**Solution.** Left to the reader. Enjoy! Exercise 3: Extracting and Fuzzing C Command-Line OptionsIn C programs, the `getopt()` function is frequently used to process configuration options. A call```getopt(argc, argv, "bf:")```indicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon). Part 1: Getopt FuzzingWrite a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.Apply this on `grep` and `ls`; report the resulting grammars and results. **Solution.** Left to the reader. Enjoy hacking! Part 2: Fuzzing Long Options in CSame as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts "long" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the "long" options are defined in a separately defined structure. **Solution.** Left to the reader. Enjoy hacking! Exercise 4: Expansions in ContextIn our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:```<option> ::= ... | --line-range <line> <line> | ...<line> ::= <int><int> ::= (-)?<digit>+<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9```
###Code
autopep8_runner.ebnf_grammar()["<line>"]
autopep8_runner.ebnf_grammar()["<int>"]
autopep8_runner.ebnf_grammar()["<digit>"]
###Output
_____no_output_____ |
03. 협업필터링 기반 추천시스템/03. 협업필터링 기반 추천시스템 - SGD/03. 협업필터링 기반 추천시스템 - SGD(PyTorch).ipynb | ###Markdown
Data
###Code
import pandas as pd
import numpy as np

movie = pd.read_csv("./ratings.csv")
user2idx = {}
for i, l in enumerate(movie['userId'].unique()):
user2idx[l] = i
movie2idx = {}
for i, l in enumerate(movie['movieId'].unique()):
movie2idx[l] = i
idx2user = {i: user for user, i in user2idx.items()}
idx2movie = {i: item for item, i in movie2idx.items()}
useridx = movie['useridx'] = movie['userId'].apply(lambda x: user2idx[x]).values
movieidx = movie['movieidx'] = movie['movieId'].apply(lambda x: movie2idx[x]).values
rating = movie['rating'].values
n_users = movie['userId'].nunique()
n_items = movie['movieId'].nunique()
import scipy
ratings = scipy.sparse.csr_matrix((rating, (useridx, movieidx)), shape=(len(set(useridx)), len(set(movieidx))))
###Output
_____no_output_____
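###Markdown
Before modeling, a quick sketch to gauge how sparse the resulting ratings matrix is:
###Code
# fraction of (user, item) cells that actually carry a rating
density = ratings.nnz / (ratings.shape[0] * ratings.shape[1])
print("{} users x {} items, density = {:.4%}".format(ratings.shape[0], ratings.shape[1], density))
###Output
_____no_output_____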
###Markdown
Model
###Code
import torch
import torch.nn.functional as F
from torch import nn
import torch.nn.init as weight_init
class MatrixFactorization(nn.Module):
    def __init__(self, R, n_users, n_items, n_factors=20):
        super().__init__() # call the parent class (torch.nn.Module) __init__
        self.user_factors = nn.Embedding(n_users, n_factors)
        self.item_factors = nn.Embedding(n_items, n_factors)
        # initialize the embedding weights
        weight_init.xavier_uniform_(self.user_factors.weight)
        weight_init.xavier_uniform_(self.item_factors.weight)
        # original rating matrix
        self.R = R
    def forward(self, user, item):
        pred = (self.user_factors(user) * self.item_factors(item)).sum(1)
        return pred
    def complete_matrix(self):
        return torch.matmul(self.user_factors.weight, self.item_factors.weight.T)
model = MatrixFactorization(ratings, n_users, n_items, n_factors=20)
###Output
_____no_output_____
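###Markdown
As a quick sanity check of the forward pass, we can score a single (user, item) pair with the still-untrained model; a minimal sketch (the indices 0 are arbitrary examples):
###Code
u = torch.LongTensor([0])
i = torch.LongTensor([0])
print(model(u, i))  # dot product of two randomly initialized factor vectors, shape (1,)
###Output
_____no_output_____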
###Markdown
Matrix Factorization without batching
###Code
from tqdm import tqdm_notebook

dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3) # learning rate
loss_func = torch.nn.MSELoss()
rows, cols = ratings.nonzero()
nb_epochs = 10
for epoch in tqdm_notebook(range(nb_epochs)):
    train_loss = 0
    for row, col in zip(*(rows, cols)):
        # reset the gradients to zero
        optimizer.zero_grad()

        # convert the data to tensors
        rating = torch.FloatTensor([ratings[row, col]])
        row = torch.LongTensor([row])
        col = torch.LongTensor([col])

        # make a prediction and compute the loss
        prediction = model(row, col)
        loss = loss_func(prediction, rating)
        train_loss += loss.item()

        # backpropagation
        loss.backward()

        # update the parameters
        optimizer.step()

    print('Epoch {:4d}/{} Loss: {:.6f}'.format(epoch+1, nb_epochs, train_loss/len(rows)))
###Output
_____no_output_____
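###Markdown
After training, a small sketch for computing the RMSE over the observed entries (assumes `model` and `ratings` from above):
###Code
rows, cols = ratings.nonzero()
with torch.no_grad():
    preds = model(torch.LongTensor(rows), torch.LongTensor(cols)).numpy()
truth = np.asarray(ratings[rows, cols]).ravel()
print("train RMSE: {:.4f}".format(np.sqrt(np.mean((truth - preds) ** 2))))
###Output
_____no_output_____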
###Markdown
Recommend
###Code
idx2rec = {}
for u in idx2user.keys():
    # score every item for user index u
    scores = torch.matmul(model.user_factors.weight[u], model.item_factors.weight.T).detach().numpy()
    item_rec = np.argsort(-scores)[0:200]
    # items the user has already rated must be excluded from the recommendation
    seen = set(movie[movie['useridx'] == u]['movieidx'].unique())
    item_rec = [idx2movie[x] for x in item_rec if x not in seen][0:100]
    idx2rec[idx2user[u]] = item_rec
idx2rec[idx2user[0]]
###Output
_____no_output_____ |
Python3_Demo.ipynb | ###Markdown
Client Retention Demo Using PythonIn this demo, we will show Anaconda functionality accessing enterprise data from VSAM and DB2. The data stored in VSAM consists of 6,001 rows of customer information. The data stored in DB2 consists of 20,000 rows of transaction data. The data is transformed and joined in a Pandas DataFrame, which is used to perform exploratory analyses. A random forest algorithm is then used to predict customer churn.
###Code
USERNAME="???????"
PASSWORD="????????"
MDSS_SSID="AZK1"
DB2_SSID="DBBG"
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings("ignore")
warnings.simplefilter("ignore", category=PendingDeprecationWarning)
###Output
_____no_output_____
###Markdown
Set up Mainframe Data ConnectionsThis step will set up the VSAM and DB2 connections to access the data and load them into Pandas DataFrames. The dsdbc module is delivered with the z/OS IzODA Anaconda distribution. It enables Python applications to access the z/OS IzODA Mainframe Data Service. The Data Service component (MDS) provides optimized, virtualized, and parallelized access to both IBM Z data sources and other off-platform data sources.
###Code
def cp1047_to_utf8(list):
list_out = []
for e in list:
x = ()
for i in e:
if isinstance(i, (str,)):
s = i.encode('utf16').decode('cp1047').encode('utf8').decode('utf16')[2:]
x = x + (s,)
else:
x = x + (i,)
list_out.append(x)
return list_out
def load_data_from_mds(vtable_name, user, password, mds_id=MDSS_SSID):
import dsdbc
conn =dsdbc.connect(SSID=mds_id, user=user, password=password)
cursor = conn.cursor()
cursor.execute("SELECT * FROM " + vtable_name)
rows = cursor.fetchall()
label = []
for col in cursor.description: label.append(col[0].lower())
conn.close()
return pd.DataFrame(rows, columns=label)
def load_data_from_db2(table_name, user, password, mds_id=MDSS_SSID, db2_id=DB2_SSID):
import dsdbc
conn =dsdbc.connect(SSID=mds_id, user=user, password=password, dsid=db2_id)
cursor = conn.cursor()
sql = "SELECT * FROM " + table_name
#print(sql)
cursor.execute(sql)
rows = cp1047_to_utf8(cursor.fetchall())
label = []
for col in cursor.description: label.append(col[0].lower())
conn.close()
return pd.DataFrame(rows, columns=label)
###Output
_____no_output_____
###Markdown
***Credit card transactions***Load credit card transactions into a Pandas DataFrame.
###Code
txn_df = load_data_from_db2(table_name='SPARKDB.SPPAYTB1', user=USERNAME, password=PASSWORD)
txn_df['acaureq_aureq_tx_dt_ttlamt'] = pd.to_numeric(txn_df['acaureq_aureq_tx_dt_ttlamt'])
txn_df['cont_id'] = txn_df['cont_id'].astype('int64')
txn_df['acaureq_hdr_credtt'] = pd.to_datetime(txn_df['acaureq_hdr_credtt'])
txn_df['date'] = txn_df['acaureq_hdr_credtt'].apply(lambda x: x.date())
txn_df
###Output
_____no_output_____
###Markdown
***Client Data***Load client data into a Pandas DataFrame.
###Code
client_df = load_data_from_mds(vtable_name='VSAM_CLIENT', user=USERNAME, password=PASSWORD)
client_df = client_df.set_index("cont_id")
client_df
###Output
_____no_output_____
###Markdown
Aggregate statisticsCalculate a few aggregate statistics based on credit transactions and join the results to the client data DataFrame.
###Code
# Total transactions per customer
total_txns_df = txn_df.groupby('cont_id').size().rename("total_txns").to_frame()
client_df = total_txns_df.join(client_df)
# Total transaction amounts per customer
total_txn_amount_df = txn_df.groupby('cont_id')['acaureq_aureq_tx_dt_ttlamt'].sum().rename("total_txn_amount").to_frame()
client_df = client_df.join(total_txn_amount_df)
# Average transaction amounts per customer
avg_txn_amount_df = txn_df.groupby('cont_id')['acaureq_aureq_tx_dt_ttlamt'].mean().rename("avg_txn_amount").to_frame()
client_df = client_df.join(avg_txn_amount_df)
# Average daily transactions per customer
daily_txns = txn_df.groupby(['date', 'cont_id']).size()
# Missing transactions on a particular day means customer had none.
# These days should be included in the average as 0 transaction days.
avg_daily_txns_df = daily_txns.unstack().fillna(0).mean().rename("avg_daily_txns").to_frame()
client_df = client_df.join(avg_daily_txns_df)
client_df
###Output
_____no_output_____
###Markdown
Exploratory AnalysesWe begin our exploration of the data set by creating a scatterplot of annual_income vs. age_years and their associated histograms. Matplotlib and Seaborn are two common plotting libraries used in Python. These plotting libraries are useful in creating custom visualizations to help gain insights from our data.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
%matplotlib inline
def jointplot(x, y, data, **kwargs):
size = kwargs.pop('size', 10)
alpha = kwargs.pop('alpha', 0.3)
return sns.jointplot(x=x, y=y, data=data,
alpha=alpha,
size=size,
**kwargs)
# for widget
def w_jointplot(x, y):
g = jointplot(x, y, filter_outliers(client_df, by_col=y))
plt.close()
return g.fig
churn_labels = ['Did not churn', 'Did churn']
def filter_outliers(d, by_col=None):
    if isinstance(d, pd.Series):
return d[((d-d.mean()).abs()<=3*d.std())]
elif isinstance(d, pd.DataFrame):
if not by_col:
raise ValueError('by_col is required for DataFrame')
return d[np.abs(d[by_col]-d[by_col].mean())<=(3*d[by_col].std())]
ax = jointplot('age_years', 'annual_income', filter_outliers(client_df, by_col='annual_income'))
###Output
_____no_output_____
###Markdown
CorrelationsNext, we compute the correlation coefficients between each variable and create a color-coded correlation matrix.
###Code
corr = client_df.corr()
# only show lower triangle
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(12,12))
ax = sns.heatmap(corr, mask=mask, square=True, annot=True, fmt='.2f',
cbar=True,
ax=ax)
title = ax.set_title('Correlations', size=14)
###Output
_____no_output_____
###Markdown
ChurnHere we plot the distributions of clients who did and did not churn. The green histogram shows the number of clients who did churn. The blue histogram shows the number of clients who did not churn. The line graphs show the density functions for each case.
###Code
def plot_churn_by(df, col, **kwargs):
f, ax = plt.subplots(figsize=(12,10), sharex=True)
kde = kwargs.get('kde', False)
hist = kwargs.get('hist', False)
for churn in df.churn.unique():
sns.distplot(df[df.churn == churn][col],
label=churn_labels[churn],
kde_kws={'shade': (kde and not hist)},
ax=ax,
**kwargs)
ax.set_title('Client Churn by {}'.format(col))
label = ax.set_xlabel('{}'.format(col))
return f, ax
def w_plot_churn_by(column, hist=True, kde=False, norm_hist=False):
df = filter_outliers(client_df, by_col=column)
f, ax = plot_churn_by(df, column, hist=hist, kde=kde, norm_hist=norm_hist)
plt.legend()
plt.close()
return f
f, ax = plot_churn_by(client_df, 'age_years')
ax = plt.legend()
###Output
_____no_output_____
###Markdown
As shown in the correlation matrix above, the two features that showed a negative correlation with churn were age and activity level. Here we generate a boxplot with those two features as the axes, and churn as the category. The plot shows that clients that churn tend to be younger across all levels of activity.
###Code
col = 'age_years'
data = filter_outliers(client_df, by_col=col)
f, ax = plt.subplots(figsize=(12,8))
ax = sns.boxplot(x='activity_level', y=col, hue="churn", data=data,
palette='muted', ax=ax)
title = ax.set_title('Client Churn by Activity Level')
label = ax.set_ylabel('Age (Years)')
label = ax.set_xlabel('Activity Level')
handles, labels = ax.get_legend_handles_labels()
legend = ax.legend(handles, churn_labels)
###Output
_____no_output_____
###Markdown
This beeswarm plot shows clients binned by the level of activity they maintain with the bank. Clients that churned maintained lower levels of activity (0-2). And of clients within these lower activity levels, younger clients churned more than others.
###Code
f, ax = plt.subplots(figsize=(10,8))
ax = sns.swarmplot(x='activity_level', y='age_years', hue='churn',
data=data.sample(n=100, random_state=51),
palette='muted', ax=ax)
title = ax.set_title('Client Churn by Activity Level')
label = ax.set_ylabel('Age (Years)')
label = ax.set_xlabel('Activity Level')
handles, labels = ax.get_legend_handles_labels()
legend = ax.legend(handles, churn_labels)
###Output
_____no_output_____
###Markdown
Train churn modelWe now start to do some predictive analyses on the data to evaluate customer churn. To keep things simple, we use a single data set, which we split into training and test data sets. We use the training data to train the model, and the test data to make predictions about lost revenue to the bank.We use a supervised learning algorithm, random forest, to train the model. Random Forest is a popular algorithm for both classification and regression. It requires very little tuning and is less prone to overfitting. Random forest is an aggregation of decision trees where each tree classifies an observation in a dataset. Since random forest aggregates many classifiers, it is considered an ensemble method. Using scikit learn, we create our random forest for classification.
###Code
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
def make_feature_space(df):
'''Create the feature space required by our classifier.'''
# drop columns/features we don't want/need for the classifier
features_df = df.drop(['churn', 'customer_id'], axis=1, errors='ignore')
    X = features_df.values.astype(np.float64)
# normalize feature values
scaler = StandardScaler()
X = scaler.fit_transform(X)
return X
def predict_churn(X):
    '''Predict the probability of churn from a feature set.'''
return clf.predict_proba(X)[:,1]
def train_model(X, y):
'''Train our classifier using features X and target variable y.'''
clf = RF(n_estimators=100)
return clf.fit(X, y)
def init_model(df):
# split data into train, test sets
train_index, test_index = train_test_split(df.index, random_state=99)
    train_df = df.loc[train_index]
    test_df = df.loc[test_index]
# target variable
y = np.array(train_df['churn'])
# extract features
X = make_feature_space(train_df)
# train classifier
clf = train_model(X, y)
return clf, test_df
###Output
_____no_output_____
###Markdown
After training the model, the churn classifier and the test data set are used for our churn predictions.
###Code
clf, test_df = init_model(client_df)
###Output
_____no_output_____
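###Markdown
Before predicting losses, it can help to see which features drive the classifier. A small sketch, assuming the feature order mirrors the columns kept by `make_feature_space()`:
###Code
feature_names = client_df.drop(['churn'], axis=1, errors='ignore').columns
for name, importance in sorted(zip(feature_names, clf.feature_importances_), key=lambda p: -p[1]):
    print("{:20s} {:.3f}".format(name, importance))
###Output
_____no_output_____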
###Markdown
Calculate business lossIn this simple example, we calculate the predicted loss of business (revenue) for all clients in the test data set. We calculate the revenue from each client, and multiply that by the churn probability to determine the predicted loss.
###Code
def calc_business_loss(df):
df['customer_id'] = df.index
data = df.copy()
# extract features
X = make_feature_space(df)
# predict churn
data['churn_probability'] = predict_churn(X)
# TODO: avg_daily_balance would be a nice feature to have here
# for now, we'll just use fraction of income
avg_daily_balance = df['annual_income'] / 6
# Interest made on deposits
deposit_rate = 0.02
# Fee collected for each credit txn
credit_rate = 0.015
# Assume we make some money on trading fees and/or portfolio management
mgmt_rate = 0.02
# How much is each customer worth to the business?
worth = deposit_rate * avg_daily_balance + \
mgmt_rate * df['annual_invest'] + \
credit_rate * df['total_txn_amount']
data['worth'] = worth
# How much would we lose per annum?
data['predicted_loss'] = data['churn_probability'] * worth
return data.sort_values(by='predicted_loss', ascending=False)
churn_df = calc_business_loss(test_df)
churn_df.head()
###Output
_____no_output_____
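###Markdown
Summing the per-client estimates gives a rough figure for the total revenue at risk, as in this one-line sketch:
###Code
print("Total predicted annual loss: ${:,.2f}".format(churn_df['predicted_loss'].sum()))
###Output
_____no_output_____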
###Markdown
Loss by Age GroupIn this section, we calculate and plot the predicted loss of revenue by age group. In our data set, age is an important feature in predicting if a client will churn. We create a DataFrame containing the cumulative predicted loss by age group.
###Code
def group_by_age(df, bins=None):
if bins is None:
bin_size = 5
_min, _max = int(df.age_years.min()), int(df.age_years.max())
bins = range(_min, _max + bin_size, 5)
return df.groupby(pd.cut(df.age_years, bins=bins))
data_by_age = churn_df.pipe(group_by_age)
loss_by_age_df = data_by_age['predicted_loss'].sum().reset_index()
loss_by_age_df['age_years'] = loss_by_age_df['age_years'].astype(str)
loss_by_age_df.plot(x='age_years', y='predicted_loss', style='o')
###Output
_____no_output_____ |
notebooks/Dask-MPI_Volume_Render.ipynb | ###Markdown
Multi-Node Rendering with Dask-MPI and Dask.ArrayIf you looked at the Array.ipynb example, you saw server-side rendering driven by Jupyter's python kernel, but we can also drive the renderer with Dask. What's more, we can use a cluster of Dask-MPI workers to distribute the rendering across multiple GPUs, or even multiple nodes. It takes a little more setup than the single-node, Jupyter driven case but, with large enough data, sometimes you just want to be able to to throw hardware at the problem.Before we get started, we'll need a running Dask-MPI cluster. The scheduler process and worker processes are launched via separate `mpiexec`/`mpirun` calls--the Dask scheduler doesn't participate as a worker, which means that one of our ranks would be missing from `COMM_WORLD` if we launched the scheduler and workers together. The root project directory contains a script which demonstrates the syntax for launching a dask-mpi cluster.Once the cluster is ready, our next step is to connect to the Dask scheduler from the Jupyter client. Here we're using a description file (`scheduler.json`) output by the scheduler process at startup, but you could just as easily connect to it via URL.
###Code
from dask.distributed import Client
client = Client(scheduler_file='/tmp/scheduler.json', set_as_default=True) #Connect to the Dask scheduler
N = len(client.scheduler_info()['workers']) #Get the number of workers in our cluster
print("Connected to cluster with", N, "workers")
###Output
_____no_output_____
###Markdown
We're using the same data here as in the single-node array example, but this time we use Dask to load multiple chunks of it in parallel. We launch 128 tasks, each of which will load a single chunk in a lazy fashion. Once the data is loaded, we rechunk and rebalance in order to create a 1:1 mapping of chunks to workers.
###Code
# Make sure we're able to import urllib.request
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
import os
# Check for a cached copy of the dataset, and download it if needed
# NOTE: we use an absolute path to the data because the cluster may not be running in the same working directory as the Jupyter kernel
filename = '/tmp/Supernova_1350.dat'
if not os.path.exists(filename):
url = 'https://data.kitware.com/api/v1/item/5bdc652f8d777f21798533f0/download'
urlretrieve(url, filename)
import numpy as np
### Load the array in parallel
def getChunk(fn, n, shape, dtype):
sz = shape[0]*shape[1]*shape[2]
with open(fn, 'rb') as f:
f.seek(n*sz*4)
return np.fromfile(f, dtype=dtype, count=sz).reshape(shape)
import dask
import dask.array as da
from dask.distributed import wait
dims,dtype = [432,432,432],np.float32
shape = [1,dims[1],dims[2]]
parts = [da.from_delayed(dask.delayed(getChunk)(filename,n,shape,dtype),shape,dtype) for n in range(0,dims[0])]
ar = da.concatenate(parts) #combine array parts into a single dask array
ar = ar.rechunk([dims[0]/N, dims[1], dims[2]]).persist() #rechunk to get 1 block per worker
wait(ar) #wait on the load+concat+rechunk to make the data resident on workers
client.rebalance() #redistribute data evenly
###Output
_____no_output_____
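###Markdown
As a sanity check, a small sketch inspecting the chunk layout; after the rechunk there should be one block per worker along the partitioned axis:
###Code
print("chunk sizes:", ar.chunks)   # expect N blocks along axis 0
print("block grid: ", ar.numblocks)
###Output
_____no_output_____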
###Markdown
Now that we're set up, we're ready to launch a `PVRenderActor` per worker. We use Dask Actors for rendering because they can maintain their own internal state, and can wait around as background threads on our workers until we request a new frame. This means we initialize our state just once at the start of rendering, and still have our cluster free for further data processing.Actually launching the Actors is a two step process. First, `client.map(actor=True)` instantiates our Actor objects across all of the workers. Note the `range(N)` (`N` is the number of workers in our cluster) in the call, which makes sure that Dask spawns the one task/worker that we want. `client.map` just returns futures, so we need to gather the results back to the Jupyter kernel in order to work with them. The final output is a list of `dask.distributed.actor.Actor` that can be used to access the Actors across all of the ranks.
###Code
from ipyparaview import PVRenderActor
renderers = client.gather(client.map(PVRenderActor, range(N), actor=True))
###Output
_____no_output_____
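###Markdown
A quick sketch to confirm the actors are up. Attribute access on Dask actors is synchronous, and each `PVRenderActor` keeps the rank it was constructed with (the `workerState` function below relies on the same attribute):
###Code
print(sorted(r.rank for r in renderers))  # expect 0..N-1
###Output
_____no_output_____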
###Markdown
Similar to the single-node rendering example, we need to set up some ParaView state. The difference here is that the state has to be configured on every worker, instead of locally in the Jupyter kernel. The state script is wrapped in a function object, which is then passed to and run on every worker.
###Code
# Define a function for remote execution that will set up the ParaView state
def workerState(self, arr):
import numpy as np
import vtk
from vtk.util import numpy_support as vtknp
#Use the undocumented block slicing to get the block for this rank
wdims = arr.shape[::-1]
ar = arr.blocks[self.rank].compute()
dims = ar.shape[::-1]
print("Rank", self.rank, "has array with local/global dims", dims, wdims)
ar = np.reshape(ar, dims[0]*dims[1]*dims[2])
ext = [0,dims[0]-1, 0,dims[1]-1, max(self.rank*dims[2]-1,0),(self.rank+1)*dims[2]-1]
wext = [0,wdims[0]-1, 0,wdims[1]-1, 0,wdims[2]-1]
vtkimg = vtk.vtkImageData()
vtkimg.Initialize()
vtkimg.SetExtent(ext)
vtkimg.SetSpacing([1,1,1])
#set the extent for the whole dataset
vi = vtk.vtkInformation()
vtkimg.CopyInformationToPipeline(vi)
vi.Set(vtk.vtkStreamingDemandDrivenPipeline.WHOLE_EXTENT(), wext[0],wext[1],wext[2],wext[3],wext[4],wext[5])
vtkimg.CopyInformationFromPipeline(vi)
varnm = 'E' #'E' is entropy for this data
vtkarr = vtknp.numpy_to_vtk(ar)
vtkarr.SetName(varnm)
vtkimg.GetPointData().AddArray(vtkarr)
vtkimg.GetPointData().SetScalars(vtkarr)
self.TP = self.pvs.TrivialProducer()
self.TP.GetClientSideObject().SetOutput(vtkimg)
self.TP.UpdatePipeline()
    #initialize some renderer settings
self.renv.ViewSize = [800, 500]
self.renv.CameraPosition = [650,0,0]
self.renv.Background = [0.0, 0.0, 0.0]
#create a display object for the data, and set it to volume render
self.TPDisplay = self.pvs.Show(self.TP, self.renv)
ePWF,eLUT = self.pvs.GetOpacityTransferFunction(varnm), self.pvs.GetColorTransferFunction(varnm)
eLUT.RGBPoints = [3.0241666020214752e-15, 0.0392156862745098, 1.0, 0.9686274509803922, 0.05988497659564321, 0.0392156862745098, 1.0, 0.9686274509803922, 0.06215288117527962, 0.0, 0.0, 0.0, 0.06337877362966537, 0.0, 0.0, 0.0, 0.06871142238378525, 0.901960784314, 0.0, 0.0, 0.0716535672545433, 0.901960784314, 0.901960784314, 0.0, 0.08403510600328445, 0.9882352941176471, 0.9882352941176471, 0.9882352941176471, 0.11376306414604187, 1.0, 1.0, 1.0]
eLUT.ColorSpace = 'RGB'
ePWF.Points = [3.0241666020214752e-15, 0.0, 0.5, 0.0, 0.032547514885663986, 0.0, 0.5, 0.0, 0.03309916704893112, 0.3529411852359772, 0.5, 0.0, 0.03346693515777588, 0.0, 0.5, 0.0, 0.06215288117527962, 0.0, 0.5, 0.0, 0.06779199838638306, 0.05882352963089943, 0.8863638639450073, 0.0, 0.07698621600866318, 0.11029411852359772, 0.5, 0.0, 0.08078648895025253, 0.04411764815449715, 0.5, 0.0, 0.08244144916534424, 0.4852941334247589, 0.5, 0.0, 0.08378992974758148, 0.0, 0.5, 0.0, 0.08746761322713148, 0.0, 0.5, 0.0, 0.09617146849632263, 0.0, 0.5, 0.0, 0.10965631902217865, 0.4117647111415863, 0.5, 0.0, 0.11376306414604187, 1.0, 0.5, 0.0]
# trace defaults for the display properties.
self.TPDisplay.Representation = 'Volume'
self.TPDisplay.ColorArrayName = ['POINTS', varnm]
self.TPDisplay.LookupTable = self.pvs.GetColorTransferFunction(varnm)
self.TPDisplay.OpacityArray = ['POINTS', varnm]
self.TPDisplay.ScalarOpacityFunction = self.pvs.GetOpacityTransferFunction(varnm)
# Submit the setup function for execution on Dask workers
wait([r.run(workerState, [ar]) for r in renderers])
###Output
_____no_output_____
###Markdown
And that's it! We can now pop up a PVDisplay widget and do some interactive rendering. Note that we pass in the list of render Actors, instead of a `RenderView` object as we did in the single-node example.
###Code
from ipyparaview.widgets import PVDisplay
w = PVDisplay(renderers)
display(w)
###Output
_____no_output_____
###Markdown
Multi-Node Rendering with Dask-MPI and Dask.ArrayIf you looked at the Array.ipynb example, you saw server-side rendering driven by Jupyter's python kernel, but we can also drive the renderer with Dask. What's more, we can use a cluster of Dask-MPI workers to distribute the rendering across multiple GPUs, or even multiple nodes. It takes a little more setup than the single-node, Jupyter driven case but, with large enough data, sometimes you just want to be able to throw hardware at the problem.Before we get started, we'll need a running Dask-MPI cluster. The scheduler process and worker processes are launched via separate `mpiexec`/`mpirun` calls--the Dask scheduler doesn't participate as a worker, which means that one of our ranks would be missing from `COMM_WORLD` if we launched the scheduler and workers together. The root project directory contains a script which demonstrates the syntax for launching a dask-mpi cluster.Once the cluster is ready, our next step is to connect to the Dask scheduler from the Jupyter client. Here we're using a description file (`scheduler.json`) output by the scheduler process at startup, but you could just as easily connect to it via URL.
###Code
from dask.distributed import Client
client = Client(scheduler_file='/tmp/scheduler.json', set_as_default=True) #Connect to the Dask scheduler
N = len(client.scheduler_info()['workers']) #Get the number of workers in our cluster
print("Connected to cluster with", N, "workers")
###Output
_____no_output_____
###Markdown
We're using the same data here as in the single-node array example, but this time we use Dask to load multiple chunks of it in parallel. We launch 128 tasks, each of which will load a single chunk in a lazy fashion. Once the data is loaded, we rechunk and rebalance in order to create a 1:1 mapping of chunks to workers.
###Code
# Make sure we're able to import urllib.request
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
import os
# Check for a cached copy of the dataset, and download it if needed
# NOTE: we use an absolute path to the data because the cluster may not be running in the same working directory as the Jupyter kernel
filename = '/tmp/Supernova_1350.dat'
if not os.path.exists(filename):
url = 'https://data.kitware.com/api/v1/item/5bdc652f8d777f21798533f0/download'
urlretrieve(url, filename)
import numpy as np
### Load the array in parallel
def getChunk(fn, n, shape, dtype):
sz = shape[0]*shape[1]*shape[2]
with open(fn, 'rb') as f:
f.seek(n*sz*4)
return np.fromfile(f, dtype=dtype, count=sz).reshape(shape)
import dask
import dask.array as da
from dask.distributed import wait
dims,dtype = [432,432,432],np.float32
shape = [1,dims[1],dims[2]]
parts = [da.from_delayed(dask.delayed(getChunk)(filename,n,shape,dtype),shape,dtype) for n in range(0,dims[0])]
ar = da.concatenate(parts) #combine array parts into a single dask array
ar = ar.rechunk([dims[0]/N, dims[1], dims[2]]).persist() #rechunk to get 1 block per worker
wait(ar) #wait on the load+concat+rechunk to make the data resident on workers
client.rebalance() #redistribute data evenly
###Output
_____no_output_____
###Markdown
Now that we're set up, we're ready to launch a `PVRenderActor` per worker. We use Dask Actors for rendering because they can maintain their own internal state, and can wait around as background threads on our workers until we request a new frame. This means we initialize our state just once at the start of rendering, and still have our cluster free for further data processing.Actually launching the Actors is a two step process. First, `client.map(actor=True)` instantiates our Actor objects across all of the workers. Note the `range(N)` (`N` is the number of workers in our cluster) in the call, which makes sure that Dask spawns the one task/worker that we want. `client.map` just returns futures, so we need to gather the results back to the Jupyter kernel in order to work with them. The final output is a list of `dask.distributed.actor.Actor` that can be used to access the Actors across all of the ranks.
###Code
from ipyparaview import PVRenderActor
renderers = client.gather(client.map(PVRenderActor, range(N), actor=True))
###Output
_____no_output_____
###Markdown
Similar to the single-node rendering example, we need to set up some ParaView state. The difference here is that the state has to be configured on every worker, instead of locally in the Jupyter kernel. The state script is wrapped in a function object, which is then passed to and run on every worker.
###Code
# Define a function for remote execution that will set up the ParaView state
def workerState(self, arr):
import numpy as np
import dask.array as da
import vtk
from vtk.util import numpy_support as vtknp
#Set a couple of constants
useIndex = False
ghostLevels = 1
if useIndex:
self.pvs.LoadPlugin('/usr/local/paraview/lib/paraview-5.8/plugins/pvNVIDIAIndeX/pvNVIDIAIndeX.so', remote=False, ns=globals())
#Use Dask.array's overlap computation to compute the ghost (overlap) region
#NOTE: we limit ghost cell exchange to the z-axis in this example, since that's what we partition on
arr_ov = da.overlap.overlap(arr, depth={0:ghostLevels}, boundary='nearest')
#Use the undocumented block slicing to get the block for this rank
wdims = arr.shape[::-1]
ar = arr_ov.blocks[self.rank].compute()
odims = ar.shape[::-1]
dims = [odims[0], odims[1], odims[2]-2*ghostLevels]
print("[", self.rank, "] local/overlap/global dims", dims, odims, wdims)
ar = np.reshape(ar, odims[0]*odims[1]*odims[2])
ext = [0,dims[0]-1,
0,dims[1]-1,
max(self.rank*dims[2]-1,0),(self.rank+1)*dims[2]-1]
wext = [0,wdims[0]-1, 0,wdims[1]-1, 0,wdims[2]-1]
print("[", self.rank, "] local/global extent", ext, wext)
vtkimg = vtk.vtkImageData()
vtkimg.Initialize()
vtkimg.SetExtent(ext)
vtkimg.SetSpacing([1,1,1])
#set the extent for the whole dataset
vi = vtk.vtkInformation()
vtkimg.CopyInformationToPipeline(vi)
vi.Set(vtk.vtkStreamingDemandDrivenPipeline.WHOLE_EXTENT(), wext[0],wext[1],wext[2],wext[3],wext[4],wext[5])
vtkimg.CopyInformationFromPipeline(vi)
varnm = 'E' #'E' is entropy for this data
vtkarr = vtknp.numpy_to_vtk(ar)
vtkarr.SetName(varnm)
vtkimg.GetPointData().AddArray(vtkarr)
vtkimg.GetPointData().SetScalars(vtkarr)
self.TP = self.pvs.TrivialProducer()
self.TP.GetClientSideObject().SetOutput(vtkimg)
self.TP.UpdatePipeline()
    #initialize some renderer settings
self.renv.ViewSize = [800, 500]
self.renv.CameraPosition = [650,0,0]
self.renv.Background = [0.0, 0.0, 0.0]
#create a display object for the data, and set it to volume render
self.TPDisplay = self.pvs.Show(self.TP, self.renv)
ePWF,eLUT = self.pvs.GetOpacityTransferFunction(varnm), self.pvs.GetColorTransferFunction(varnm)
eLUT.RGBPoints = [3.0241666020214752e-15, 0.0392156862745098, 1.0, 0.9686274509803922, 0.05988497659564321, 0.0392156862745098, 1.0, 0.9686274509803922, 0.06215288117527962, 0.0, 0.0, 0.0, 0.06337877362966537, 0.0, 0.0, 0.0, 0.06871142238378525, 0.901960784314, 0.0, 0.0, 0.0716535672545433, 0.901960784314, 0.901960784314, 0.0, 0.08403510600328445, 0.9882352941176471, 0.9882352941176471, 0.9882352941176471, 0.11376306414604187, 1.0, 1.0, 1.0]
eLUT.ColorSpace = 'RGB'
ePWF.Points = [3.0241666020214752e-15, 0.0, 0.5, 0.0, 0.032547514885663986, 0.0, 0.5, 0.0, 0.03309916704893112, 0.3529411852359772, 0.5, 0.0, 0.03346693515777588, 0.0, 0.5, 0.0, 0.06215288117527962, 0.0, 0.5, 0.0, 0.06779199838638306, 0.05882352963089943, 0.8863638639450073, 0.0, 0.07698621600866318, 0.11029411852359772, 0.5, 0.0, 0.08078648895025253, 0.04411764815449715, 0.5, 0.0, 0.08244144916534424, 0.4852941334247589, 0.5, 0.0, 0.08378992974758148, 0.0, 0.5, 0.0, 0.08746761322713148, 0.0, 0.5, 0.0, 0.09617146849632263, 0.0, 0.5, 0.0, 0.10965631902217865, 0.4117647111415863, 0.5, 0.0, 0.11376306414604187, 1.0, 0.5, 0.0]
# trace defaults for the display properties.
if useIndex:
self.TPDisplay.Representation = 'NVIDIA IndeX'
else:
self.TPDisplay.Representation = 'Volume'
self.TPDisplay.ColorArrayName = ['POINTS', varnm]
self.TPDisplay.LookupTable = self.pvs.GetColorTransferFunction(varnm)
self.TPDisplay.OpacityArray = ['POINTS', varnm]
self.TPDisplay.ScalarOpacityFunction = self.pvs.GetOpacityTransferFunction(varnm)
# Submit the setup function for execution on Dask workers
wait([r.run(workerState, [ar]) for r in renderers])
###Output
_____no_output_____
###Markdown
And that's it! We can now pop up a PVDisplay widget and do some interactive rendering. Note that we pass in the list of render Actors, instead of a `RenderView` object as we did in the single-node example.
###Code
from ipyparaview.widgets import PVDisplay
w = PVDisplay(renderers)
display(w)
###Output
_____no_output_____
###Markdown
Once you're done visualizing, the Dask-MPI cluster can be stopped by calling 'shutdown' to stop the scheduler and workers, then closing the client. You can also kill the cluster from the command line where you started it, but this tends to leave defunct processes lying around.
###Code
client.shutdown()
client.close()
###Output
_____no_output_____ |
Machine Learning - Coursera/machine-learning-ex1/ex1/__ex1.ipynb | ###Markdown
Part 2: Plotting
###Code
# Part 2: Plotting
# imports needed by this and the following cells
import numpy as np
import matplotlib.pyplot as plt
data = np.genfromtxt ('ex1data1.txt', delimiter=",")
X = np.matrix(data[:, 0]).T
y = np.matrix(data[:, 1]).T
m = len(y)
plt.scatter(X, y, alpha=0.7)
ones = np.ones((m, 1))
X = np.hstack((ones, X)) # Add a column of ones to x
###Output
_____no_output_____
###Markdown
Part 3: Cost and Gradient descent
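For reference, the functions below implement the least-squares cost $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(x^{(i)}\theta - y^{(i)}\right)^{2}$ and the gradient-descent update $\theta_j := \theta_j - \alpha \frac{1}{m}\sum_{i=1}^{m}\left(x^{(i)}\theta - y^{(i)}\right)x_j^{(i)}$, where $x^{(i)}$ is the $i$-th row of $X$ (with the leading column of ones) and $\alpha$ is the learning rate.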
###Code
def computeCost(X, y, theta):
m = len(y)
costs = np.power((X*theta - y), 2)
return (sum(costs)) / (2*m)
def computeCostDerivative(X, y, theta, j):
m = len(y)
dcosts = np.multiply((X*theta - y), X[:, j])
return sum(dcosts) / m
def gradientDescent(X, y, theta = np.zeros((2, 1)), alpha = 0.01, num_iters = 1000, verbose = False):
"""
GRADIENTDESCENT Performs gradient descent to learn theta
theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
taking num_iters gradient steps with learning rate alpha
"""
m = len(y) # number of training examples
J_history = np.zeros((num_iters, 1))
T_history = np.zeros((num_iters, 2))
    for i in range(num_iters):
        if verbose:
            J_history[i] = computeCost(X, y, theta)
            T_history[i] = theta.T
        # take a fresh copy each iteration so all components of theta are
        # updated simultaneously (reusing the same matrix object would mix
        # old and new values within one step)
        theta_temp = np.matrix(theta, dtype='float64')
        for j in range(len(theta)):
            theta_temp[j] = theta[j] - alpha * computeCostDerivative(X, y, theta, j)
        theta = theta_temp
return theta, J_history, T_history.T
iterations, alpha = 1500, 0.01 # Some gradient descent settings
print('\nTesting the cost function ...\n')
# compute and display initial cost
theta = np.zeros((2, 1))
J = computeCost(X, y, theta);
print('With theta = [0 ; 0]\nCost computed = ', J);
print('Expected cost value (approx) 32.07\n');
# further testing of the cost function
theta = np.matrix('-1 ; 2')
J = computeCost(X, y, theta);
print('\nWith theta = [-1 ; 2]\nCost computed = ', J);
print('Expected cost value (approx) 54.24\n');
print('\nRunning Gradient Descent ...\n')
# run gradient descent
theta, _, _ = gradientDescent(X, y, theta, alpha, iterations);
# print theta to screen
print('Theta found by gradient descent:\n', theta);
print('Expected theta values (approx):');
print(' -3.6303\n 1.1664\n\n');
# Predict values for population sizes of 35,000 and 70,000
predict1 = [1, 3.5] *theta;
print('For population = 35,000, we predict a profit of ', predict1*10000);
predict2 = [1, 7] * theta;
print('For population = 70,000, we predict a profit of ', predict2*10000);
# Plot the linear fit
plt.scatter(X[:,1], y, alpha=0.7, label= 'Training data')
plt.plot(X[:,1], X*theta, 'r-', alpha=0.7, label= 'Linear regression')
plt.legend()
###Output
Testing the cost function ...
With theta = [0 ; 0]
Cost computed = [[ 32.07273388]]
Expected cost value (approx) 32.07
With theta = [-1 ; 2]
Cost computed = [[ 54.24245508]]
Expected cost value (approx) 54.24
Running Gradient Descent ...
Theta found by gradient descent:
[[-3.71377262]
[ 1.17478184]]
Expected theta values (approx):
-3.6303
1.1664
For population = 35,000, we predict a profit of [[ 3979.63814181]]
For population = 70,000, we predict a profit of [[ 45097.00249877]]
###Markdown
Part 4: Visualizing J(theta_0, theta_1)
###Code
print('Visualizing J(theta_0, theta_1) ...\n')
# Grid over which we will calculate J
theta0_vals = np.linspace(-10, 10, 100)
theta1_vals = np.linspace(-1, 4, 100)
# initialize J_vals to a matrix of 0's
J_vals = np.zeros((len(theta0_vals), len(theta1_vals)))
# Fill out J_vals
for i in range(len(theta0_vals)):
for j in range(len(theta1_vals)):
t = np.matrix([[theta0_vals[i]], [theta1_vals[j]]])
J_vals[i,j] = computeCost(X, y, t)
plt.plot(theta[0], theta[1], 'rx')
# J_vals is indexed as [theta0, theta1]; transpose so contour's (x, y) axes line up
plt.contour(theta0_vals, theta1_vals, J_vals.T, np.logspace(-2, 3, 20))
plt.xlabel(r'$\theta_0$'); plt.ylabel(r'$\theta_1$');
#%%
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
plt.xlabel(r'$\theta_0$'); plt.ylabel(r'$\theta_1$');
# plot_surface expects 2-D coordinate grids, so build them with meshgrid
T0, T1 = np.meshgrid(theta0_vals, theta1_vals)
ax.plot_surface(T0, T1, J_vals.T)
print(theta)
###Output
[[-3.71377262]
[ 1.17478184]]
###Markdown
Analytical solution
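The normal equation computed below solves the least-squares problem in closed form: $\theta = (X^{T}X)^{-1}X^{T}y$ (valid when $X^{T}X$ is invertible), so we can compare it with the gradient-descent estimate.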
###Code
def get_thetas(X, y):
return (X.T*X).I * (X.T*y)
print(get_thetas(X, y))
print(gradientDescent(X, y)[0])
###Output
[[-3.89578088]
[ 1.19303364]]
[[-3.25095985]
[ 1.12837093]]
|
Logistic_Regressionof_WIKIArticles.ipynb | ###Markdown
###Code
# Spark installation on Colab
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz
!tar xf spark-3.0.0-bin-hadoop3.2.tgz
!pip install -q findspark
!rm -rf spark-3.0.0-bin-hadoop3.2.tgz
# Set JAVA_HOME and SPARK_HOME
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = "spark-3.0.0-bin-hadoop3.2"
import findspark
findspark.init("spark-3.0.0-bin-hadoop3.2")# SPARK_HOME
import sys
import re
import numpy as np
from numpy import dot
from numpy.linalg import norm
from operator import add
from pyspark.sql import SparkSession
from pyspark import SparkContext
import matplotlib.pyplot as plt
regex = re.compile('[^a-zA-Z]')
spark = SparkSession.builder.master("local[*]").getOrCreate()
sc = SparkContext.getOrCreate()
!wget https://s3.amazonaws.com/metcs777/SmallTrainingData.txt
pages=sc.textFile("SmallTrainingData.txt")
pages.take(1)
# Assumption: Each document is stored in one line of the text file
# We need this count later ...
numberOfDocs = pages.count()
print(numberOfDocs)
# Each entry in validLines will be a line from the text file
validLines = pages.filter(lambda x : 'id' in x and 'url=' in x)
# Now, we transform the filtered lines into a set of (docID, text) pairs
# (note: we map over validLines, not pages, so that malformed lines are excluded)
keyAndText = validLines.map(lambda x : (x[x.index('id="') + 4 : x.index('" url=')], x[x.index('">') + 2:][:-6]))
keyAndText.take(1)
# The following function gets a list of dictionaryPos values,
# and then creates a TF vector
# corresponding to those values... for example,
# if we get [3, 4, 1, 1, 2] we would in the
# end have [0, 2/5, 1/5, 1/5, 1/5] because 0 appears zero times,
# 1 appears twice, 2 appears once, etc.
def buildArray(listOfIndices):
returnVal = np.zeros(20000)
for index in listOfIndices:
returnVal[index] = returnVal[index] + 1
mysum = np.sum(returnVal)
returnVal = np.divide(returnVal, mysum)
return returnVal
# Cosine Similarity of two vectors
def cosineSim(x, y):
normA = np.linalg.norm(x)
normB = np.linalg.norm(y)
return np.dot(x,y)/(normA*normB)
# Now, we split the text in each (docID, text) pair into a list of words
# After this step, we have a data set with
# (docID, ["word1", "word2", "word3", ...])
# We use a regular expression here to make
# sure that the program does not break down on some of the documents
# remove all non letter characters
keyAndListOfWords = keyAndText.map(lambda x : (str(x[0]), regex.sub(' ', x[1]).lower().split()))
# Now get the top 20,000 words... first change (docID, ["word1", "word2", "word3", ...])
# to ("word1", 1) ("word2", 1)...
allWords = keyAndListOfWords.flatMap(lambda x: x[1]).map(lambda x: (x, 1))
# Now, count all of the words, giving us ("word1", 1433), ("word2", 3423423), etc.
allCounts = allWords.reduceByKey(add)
# Get the top 20,000 words in a local array in a sorted format based on frequency
topWords = allCounts.top(20000, lambda x: x[1])
#
print("Top Words in Corpus:", allCounts.top(10, key=lambda x: x[1]))
# We'll create a RDD that has a set of (word, dictNum) pairs
# start by creating an RDD that has the number 0 through 20000
# 20000 is the number of words that will be in our dictionary
topWordsK = sc.parallelize(range(20000))
# Now, we transform (0), (1), (2), ... to ("MostCommonWord", 1)
# ("NextMostCommon", 2), ...
# the number will be the spot in the dictionary used to tell us
# where the word is located
dictionary = topWordsK.map (lambda x : (topWords[x][0], x))
dictionary.cache()
print("Word Positions in our Feature Matrix. Last 20 words in 20k positions: ", dictionary.top(20, lambda x : x[1]))
#function to look up the frequency positions of a list of words in the dictionary
def getfp(inRdd, words):
result = []
for w in words:
fp = inRdd.lookup(w)
if not fp:
result.append([-1])
else:
result.append(fp)
return result
################### TASK 1 Output ##################
# get the frequency position for the following words
getfp(dictionary, ['applicant', 'and', 'attack', 'protein', 'aefwe', 'car', 'for'])
################### TASK 2 ##################
# Next, we get a RDD that has, for each (docID, ["word1", "word2", "word3", ...]),
# ("word1", docID), ("word2", docId), ...
allWordsWithDocID = keyAndListOfWords.flatMap(lambda x: ((j, x[0]) for j in x[1]))
# Now join and link them, to get a set of ("word1", (dictionaryPos, docID)) pairs
allDictionaryWords = dictionary.join(allWordsWithDocID) #Correct
# Now, we drop the actual word itself to get a set of (docID, dictionaryPos) pairs
justDocAndPos = allDictionaryWords.map(lambda x: (x[1][1], x[1][0])) #Correct
# Now get a set of (docID, [dictionaryPos1, dictionaryPos2, dictionaryPos3...]) pairs
allDictionaryWordsInEachDoc = justDocAndPos.groupByKey().map(lambda x: (x[0], list(x[1])))
# The following line this gets us a set of
# (docID, [dictionaryPos1, dictionaryPos2, dictionaryPos3...]) pairs
# and converts the dictionary positions to a bag-of-words numpy array...
allDocsAsNumpyArrays = allDictionaryWordsInEachDoc.map(lambda x: (x[0], buildArray(x[1])))
# allDocsAsNumpyArrays contains the 20000 variables of dictionary words by frequency position
# and each has a respective TF and this will be used in learning the logistic regression model
# convert docID to the Y response variables of 0 and 1. 1 for Id containing AU
# create method to quickly convert AU to 1 and everything else to 0
def isAU(stringIn):
return int(stringIn[:2]=="AU")
# map DocID to int 1 or 0 and cache model to be used in logistic regression
# cache this RDD to be used
textModelRDD = allDocsAsNumpyArrays.map(lambda x: (isAU(x[0]), x[1])).cache()
allDocsAsNumpyArrays.first()
#Define some functions to calculate LLH and perform Gradient descent
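# Background for the functions below (standard logistic regression, stated here for clarity):
#   P(y=1 | x) = 1 / (1 + exp(-theta)),  where theta = x . r
#   log-likelihood:  LLH = sum_i [ y_i * theta_i - log(1 + exp(theta_i)) ]
#   gradient wrt r:  dLLH/dr = sum_i (y_i - P(y_i=1 | x_i)) * x_i
# The loop below does gradient descent on the negative LLH, so calcGradient
# returns the negated gradient term for each example.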
# calculate the dot product of x and r:
def calcTheta(x_np, r_np):
xr_theta = np.dot(x_np, r_np)
return xr_theta
# LLH function: Take in Y, X numpy array and r numpy array
def calcLLH(y, theta_np):
result = np.sum(np.multiply(y,theta_np) - np.log(1+np.exp(theta_np)))
return result
# Define function to calculate gradient:
# take in Y, X numpy array and r numpy array
def calcGradient(y, x_np, theta_np):
term1 = (-1)*(np.multiply(y,x_np))
term2 = np.multiply(x_np, (np.exp(theta_np)/(1+np.exp(theta_np)))) #predicted
r_gd = term1+term2
return r_gd
# calculate Predicted
def prediction(x_np, theta_np):
pred_prob = 1/(1+np.exp(-np.dot(x_np, theta_np)))
predicted = int(round(pred_prob))
return predicted
# Set initial parameters
r = np.ones(20000)
itr = 0 #initiate at 0
learnRate = 0.001 #learning rate
numIter = 400 #maximum number of iterations
#initiate a cost list to keep track of cost changes
costList = []
#regularization
reg_lambda = 20
#calculate the gradient
while itr < numIter:
regRDD1 = textModelRDD.map(lambda x: (x[0], x[1], (calcTheta(x[1], r))))
regRDD2 = regRDD1.map(lambda x: (calcGradient(x[0], x[1], x[2]), calcLLH(x[0], x[2])))
gradientRDD = regRDD2.reduce(lambda a, b: np.add(a, b))
gradient = 2*gradientRDD[0]*reg_lambda
r = r - (gradient*learnRate)
costList.append(gradientRDD[1]*(-1))
itr = itr+1
print('cost:{:2.4f} Iter:{}'.format(costList[-1], itr))
    # stop early once the improvement in cost drops below the tolerance
    if len(costList)>2 and (costList[-2] - costList[-1] <= 0.01):
        itr = 0  # reset the counter so the cell can be re-run from scratch
        break
x2 = np.array(range(0, len(costList)))
fig2 = plt.figure()
fig2.add_axes()
plt.plot(x2, costList)
plt.show()
# coefficient RDD
coefRdd = sc.parallelize(r).zipWithIndex()
# save the five words with the largest regression coefficients to a list and look them up
five = coefRdd.top(5, key = lambda x: x[0])
pos5 = []
for x in five:
pos5.append(x[1])
# flip the words and position number of the dictionary to look up words by index
reverseDict = dictionary.map(lambda x: (x[1], x[0]))
#save the top 5 words associated with Australian court docs to a text file
sc.parallelize(getfp(reverseDict, pos5)).coalesce(1).saveAsTextFile("test2")
# calculate prediction value on the train set
textModelwThetaRDD = textModelRDD.map(lambda x: (x[0], x[1], calcTheta(x[1], r)))
textModelwPredictionRDD = textModelwThetaRDD.map(lambda x: (x[0], (0 if x[2] < 0 else 1)))
#confusion matrix [labeled, predicted]
#TP [1, 1]
#FP [0, 1]
#FN [1, 0]
#TN [0, 0]
confMatrixRDD = textModelwPredictionRDD.map(lambda x: ((x[0], x[1]),1)).reduceByKey(lambda a, b: a+b)
confMatrixRDD.collect()
# Accuracy: (TN + TP) / (TN + TP + FN + FP)
accuracy = (3366+0)/(3366+0+74+2)
# Recall: TP/(TP+FN)
recall = (0/(0+74))
# Precision: TP/(TP+FP)
precision = (0/(0+2))
# F ratio = 2*(precision*recall)/(precision+recall)
#f_ratio = 2*((recall*precision)/(recall+precision))  # undefined here since precision = recall = 0
print('accuracy={} recall={} precision={}'.format(accuracy, recall, precision))
## From Big Data Run:
# Confusion Matrix:
#   ((0, 0), 18339) = TN
#   ((1, 0), 169)   = FN
#   ((1, 1), 208)   = TP
#   ((0, 1), 8)     = FP
TN = 18339
FN = 169
TP = 208
FP = 8
Accuracy = (TN + TP) / (TN + TP + FN + FP)
Recall = TP/(TP+FN)
Precision = TP/(TP+FP)
F_ratio = 2*(Precision*Recall)/(Precision+Recall)
print('Accuracy = ', Accuracy)
print('Recall = ', Recall)
print('Precision = ', Precision)
print('F Ratio = ', F_ratio)
###Output
_____no_output_____ |
notebooks/modeling_NASA_GISS.ipynb | ###Markdown
Time-series modeling
We create a time-series model for the Land-Ocean Temperature Index (LOTI). We evaluate different configurations of a SARIMAX (seasonal autoregressive integrated moving average with exogenous regressors) model using grid search, with mean squared error as the performance metric.
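(For reference, $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, computed on the held-out fold of each cross-validation split.)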
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import product
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima_model import ARIMA
import statsmodels.api as sm
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
sns.set()
sns.set_context('notebook', font_scale=1.2)
# wrapper for statsmodels SARIMAX model to make it compatible with the scikit-learn API
class SARIMAXWrapper(BaseEstimator, RegressorMixin):
"""Sklearn wrapper for statsmodels SARIMAX models"""
def __init__(self,
order=(1, 0, 0),
seasonal_order=(0, 0, 0, 0),
trend=None,
measurement_error=False,
time_varying_regression=False,
mle_regression=True,
simple_differencing=False,
enforce_stationarity=True,
enforce_invertibility=True,
hamilton_representation=False,
concentrate_scale=False,
freq=None):
self.SARIMAX = sm.tsa.SARIMAX
self.order = order
self.seasonal_order = seasonal_order
self.trend = trend
        self.measurement_error = measurement_error
        # sklearn's get_params()/clone() (used by GridSearchCV) requires every
        # __init__ argument to be stored as an attribute of the same name
        self.time_varying_regression = time_varying_regression
        self.mle_regression = mle_regression
        self.simple_differencing = simple_differencing
        self.enforce_stationarity = enforce_stationarity
        self.enforce_invertibility = enforce_invertibility
        self.hamilton_representation = hamilton_representation
        self.concentrate_scale = concentrate_scale
        self.freq = freq
def fit(self, X, y=None):
self.model_ = self.SARIMAX(endog=X,
order=self.order,
seasonal_order=self.seasonal_order,
trend=self.trend,
measurement_error=self.measurement_error,
enforce_stationarity=self.enforce_stationarity,
enforce_invertibility=self.enforce_invertibility,
freq=self.freq)
        try:
            self.results_ = self.model_.fit(method='powell')
        except ValueError as error:
            print(self.order, error)
        return self  # sklearn convention: fit returns the estimator
def predict(self, X):
return self.results_.forecast(len(X))
monthly_deviations = pd.read_csv('../data/NASA_GISS_LOTI_long_format.csv',
index_col='Date',
parse_dates=['Date'])
# the series run up to (and including) 2019-01; extract the global and northern-hemisphere deviations
global_deviations = monthly_deviations['global']
northern_deviations = monthly_deviations['northern']
###Output
_____no_output_____
###Markdown
There is a clear upward trend and some seasonality in both time-series; thus they are most likely not stationary.
###Code
# plot time series
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(16, 12))
global_deviations.plot(ax=ax1, title='Global', label='Deviations')
northern_deviations.plot(ax=ax2, title='Northern hemisphere', label='Deviations')
_ = ax1.legend()
_ = ax2.legend()
###Output
_____no_output_____
###Markdown
Time series modeling
###Code
# split data set into training and validation sets
# we leave the last 12 months for validation
n_test = 12
n_train = global_deviations.shape[0] - n_test
gld_train = global_deviations.iloc[:n_train]
gld_test = global_deviations.iloc[n_train:]
nhd_train = northern_deviations.iloc[:n_train]
nhd_test = northern_deviations.iloc[n_train:]
###Output
_____no_output_____
###Markdown
We test for a unit root using the Augmented Dickey-Fuller (ADF) test. The null hypothesis is that the time-series has a unit root, which implies that it is non-stationary. We allow for a constant and linear term in the regression model; the maximum number of lags is chosen using the Akaike information criterion. The test is performed at significance level $\alpha = 0.01$. The null hypothesis cannot be rejected for either time-series.
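Concretely, with `regression='ct'` the test fits $\Delta y_t = c + \beta t + \gamma y_{t-1} + \sum_{i=1}^{k}\delta_i \Delta y_{t-i} + \varepsilon_t$ and tests $H_0\colon \gamma = 0$ (unit root) against $H_1\colon \gamma < 0$.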
###Code
print("Augmented Dickey-Fuller test",
"Global: p-value = {:.6f}".format(sm.tsa.adfuller(gld_train, regression='ct', autolag='AIC')[1]),
"Northern: p-value = {:.6f}".format(sm.tsa.adfuller(nhd_train, regression='ct', autolag='AIC')[1]), sep='\n')
###Output
Augmented Dickey-Fuller test
Global: p-value = 0.023917
Northern: p-value = 0.102194
###Markdown
The previous plots and the results form the ADF test suggests that the series are not stationary.We take the first differences to try to make the series stationary.
###Code
# calculate first differences and discard first observation (which is NaN after differencing)
gld_train_diff = gld_train.diff(1)[1:]
nhd_train_diff = nhd_train.diff(1)[1:]
###Output
_____no_output_____
###Markdown
The plot of the differenced time-series looks much closer to a stationary process. Thus it seems reasonable to use first (or higher) order differences in the model. In the ADF test we can now also reject the null hypothesis of a unit root.
###Code
# plot differenced time series
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(16, 12))
gld_train_diff.plot(ax=ax1, title='Global', label='Deviations')
nhd_train_diff.plot(ax=ax2, title='Northern hemisphere', label='Deviations')
ax1.legend()
ax2.legend()
_ = plt.suptitle('Differenced time series')
print("Augmented Dickey-Fuller test",
"Global: p-value = {:.6f}".format(sm.tsa.adfuller(gld_train_diff, regression='c', autolag='AIC')[1]),
"Northern: p-value = {:.6f}".format(sm.tsa.adfuller(nhd_train_diff, regression='c', autolag='AIC')[1]), sep='\n')
###Output
Augmented Dickey-Fuller test
Global: p-value = 0.000000
Northern: p-value = 0.000000
###Markdown
The autocorrelation function (ACF) and partial autocorrelation function (PACF) are calculated and plotted below, to get an idea of what the model orders could be.
For the Global deviations time-series: since the coefficients in the PACF seem to die out after the 6th lag, we choose 6 as the upper limit for the auto-regressive (AR) order $p$. The ACF suggests a moving average (MA) order of $1$. We attribute "significant" coefficients at higher lags to random variations in the data.
For the Northern hemisphere deviations time-series: the PACF suggests a maximum AR order of $11$, and the ACF a maximum MA order of $1$.
###Code
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 5))
nlags = 48
_ = plot_acf(gld_train_diff, lags=nlags, color='k', ax=ax1)
_ = plot_pacf(gld_train_diff, lags=nlags, color='k', ax=ax2)
_ = plt.suptitle('Global')
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 5))
_ = plot_acf(nhd_train_diff, lags=nlags, color='k', ax=ax1)
_ = plot_pacf(nhd_train_diff, lags=nlags, color='k', ax=ax2)
_ = plt.suptitle('Northern hemisphere')
###Output
_____no_output_____
###Markdown
Using the observations from the ACF and PACF we create the possible configurations for our model. A grid search is then performed over the different configurations. The models are cross-validated using time-series cross-validation, which preserves the temporal order of the data so that the model is always trained on historical data and tested on more recent "unseen" data.
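In the notation used below, each candidate is a $\mathrm{SARIMA}(p,d,q)\times(P,D,Q)_{m}$ model with seasonal period $m = 12$: $(p,d,q)$ are the non-seasonal AR, differencing, and MA orders, and $(P,D,Q)$ are their seasonal counterparts.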
###Code
model = SARIMAXWrapper()
# we keep the seasonal orders at a fixed configuration.
# The last parameter m = 12 indicates number of time steps for a single seasonal period.
# We assume that one season is a year (12 months)
grid = GridSearchCV(model,
param_grid={
'order': [
(1, 1, 1),
(2, 1, 1),
(3, 1, 1),
(4, 1, 1),
(5, 1, 1),
(6, 1, 1)
],
'seasonal_order': [(1, 1, 1, 12),],
'trend': ['ct',],
'freq': ['MS'],
},
scoring='neg_mean_squared_error',
cv=TimeSeriesSplit(n_splits=3),
n_jobs=1,
verbose=3)
grid.fit(gld_train, gld_train)
print("Best model found through grid search:")
display(grid.best_estimator_)
print("MSE of best model: {:.4f}".format(- grid.best_score_))
###Output
Best model found through grid search:
###Markdown
Take the model with the best parameter configuration.
###Code
gld_model_fit = grid.best_estimator_.results_
display(gld_model_fit.summary())
###Output
_____no_output_____
###Markdown
Check adequacy of the model by creating a QQ-plot and kernel density plot for the residuals. The residuals seem to be close to normally distributed, although the QQ-plot indicates that the tails are somewhat fatter than would be typical for normally distributed values.
###Code
gld_model_resid = pd.Series(gld_model_fit.resid)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 7))
_ = sm.qqplot(gld_model_resid, line='s', color='k', ax=ax1)
_ = ax1.set_title('QQ-plot for Global temperature deviations model')
gld_model_resid.plot(kind='kde', ax=ax2, color='k', title='Kernel density estimate for residuals')
pred_res = gld_model_fit.get_prediction(start='2018-01-01', end='2019-02-01',
full_results=True, alpha=0.05)
pred_means = pred_res.predicted_mean
# alpha = 0.05 => 95% CI
pred_cis = pred_res.conf_int(alpha=0.05)
fig = plt.figure(figsize=(16, 7))
ax = fig.add_subplot(111)
# actual data
ax.plot(gld_train['2015':], color="black", label='Training data')
ax.plot(gld_test, color="green", alpha=0.5, label='Test data')
# means
ax.plot(pred_means, lw=1, color="blue", alpha=0.5, label='SARIMA forecast')
ax.fill_between(pred_means.index, pred_cis.iloc[:, 0], pred_cis.iloc[:, 1], alpha=0.5, label='95%-confidence interval')
ax.legend(loc='upper left')
plt.draw()
model = SARIMAXWrapper(freq='MS')
grid = GridSearchCV(model,
param_grid={
'order': [
(1, 1, 1),
(2, 1, 1),
(3, 1, 1),
(4, 1, 1),
(5, 1, 1),
(6, 1, 1),
(7, 1, 1),
(8, 1, 1),
(9, 1, 1),
(10, 1, 1),
(11, 1, 1)
],
'seasonal_order': [(1, 1, 1, 12)],
'trend': ['ct'],
'freq': ['MS'],
},
scoring='neg_mean_squared_error',
cv=TimeSeriesSplit(n_splits=3),
n_jobs=1,
verbose=3)
grid.fit(nhd_train, nhd_train)
print("Best model found through grid search:")
display(grid.best_estimator_)
print("MSE of best model: {:.4f}".format(- grid.best_score_))
nhd_model_fit = grid.best_estimator_.results_
display(nhd_model_fit.summary())
###Output
_____no_output_____
###Markdown
Check adequacy of the model by creating a QQ-plot and kernel density plot for the residuals.
###Code
nhd_model_resid = pd.Series(nhd_model_fit.resid)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 7))
_ = sm.qqplot(nhd_model_resid, line='s', color='k', ax=ax1)
_ = ax1.set_title('QQ-plot for Northern hemisphere temperature deviations model')
_ = nhd_model_resid.plot(kind='kde', ax=ax2, color='k', title='Kernel density estimate for residuals')
# plot the forecast with its 95% CI and compare to the actual test data
pred_res = nhd_model_fit.get_prediction(start='2018-01-01', end='2019-02-01',
full_results=True, alpha=0.05)
pred_means = pred_res.predicted_mean
# alpha = 0.05 => 95% CI
pred_cis = pred_res.conf_int(alpha=0.05)
fig = plt.figure(figsize=(16, 7))
ax = fig.add_subplot(111)
# plot training data from 2015 onward
ax.plot(nhd_train['2015':], color="black", label='Training data')
# plot test data
ax.plot(nhd_test, color="green", alpha=0.5, label='Test data')
# means
ax.plot(pred_means, lw=1, color="blue", alpha=0.5, label='SARIMA forecast')
ax.fill_between(pred_means.index, pred_cis.iloc[:, 0], pred_cis.iloc[:, 1], alpha=0.5, label='95%-confidence interval')
ax.legend(loc='upper left')
plt.draw()
###Output
_____no_output_____ |
notebooks/layers/convolutional/SeparableConvolution2D.ipynb | ###Markdown
SeparableConvolution2D
**[convolutional.SeparableConvolution2D.0] 4 3x3 filters on 5x5x2 input, activation='linear', border_mode='valid', subsample=(1,1), depth_multiplier=1, dim_ordering='tf', bias=True**
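For context: a separable convolution factorizes a standard convolution into a depthwise step, where each input channel is convolved with its own `depth_multiplier` spatial kernels (producing `input_channels * depth_multiplier` intermediate channels), followed by a pointwise 1x1 convolution that mixes those channels into the requested number of output filters. The test cases below vary the activation, border mode, strides, and depth multiplier.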
###Code
# Imports assumed from the notebook's (omitted) preamble, repeated so this cell runs standalone
import numpy as np
from keras.models import Model
from keras.layers import Input
from keras.layers.convolutional import SeparableConvolution2D
# Minimal stand-in for the test suite's format_decimal helper
# (assumption: it rounds each value to 6 decimal places)
def format_decimal(values, places=6):
    return [round(v, places) for v in values]
data_in_shape = (5, 5, 2)
conv = SeparableConvolution2D(4, 3, 3, activation='linear', border_mode='valid',
subsample=(1, 1), depth_multiplier=1, dim_ordering='tf', bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(160)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
print('b shape:', weights[2].shape)
print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
###Output
depthwise_kernel shape: (3, 3, 2, 1)
depthwise_kernel: [0.666786, -0.353131, -0.240332, -0.38687, 0.133215, 0.397273, -0.645113, -0.216224, -0.394447, 0.467848, 0.86766, 0.712237, -0.19358, -0.889668, -0.846705, 0.838136, 0.075205, -0.066286]
pointwise_kernel shape: (1, 1, 2, 4)
pointwise_kernel: [0.666786, -0.353131, -0.240332, -0.38687, 0.133215, 0.397273, -0.645113, -0.216224]
b shape: (4,)
b: [0.666786, -0.353131, -0.240332, -0.38687]
in shape: (5, 5, 2)
in: [0.133215, 0.397273, -0.645113, -0.216224, -0.394447, 0.467848, 0.86766, 0.712237, -0.19358, -0.889668, -0.846705, 0.838136, 0.075205, -0.066286, -0.508764, -0.052183, 0.347896, 0.955889, 0.941462, 0.048041, 0.777089, 0.64464, 0.591418, 0.132861, 0.255779, -0.204615, 0.144295, 0.294353, -0.652583, -0.089829, 0.724848, -0.094612, 0.689771, -0.852965, 0.982654, -0.640659, 0.260924, -0.642287, -0.447322, -0.084257, 0.499578, 0.458206, 0.166239, -0.867684, -0.820507, -0.82673, 0.508849, -0.324211, -0.403243, -0.396073]
out shape: (3, 3, 4)
out: [0.347301, -0.450018, 0.214435, -0.122483, 0.681388, -0.108314, -0.567933, -0.470343, 0.87774, -0.40253, -0.395911, -0.527773, -0.689291, -0.150623, 0.906613, 0.55307, 0.046712, 0.336893, -0.478398, -0.134498, -0.377228, 0.116938, 0.241701, 0.243471, 0.512934, -1.336098, 1.173709, 0.018511, 0.921619, -0.639221, -0.139291, -0.489842, -0.226879, 0.209393, -0.032122, 0.105134]
###Markdown
**[convolutional.SeparableConvolution2D.1] 4 3x3 filters on 5x5x2 input, activation='relu', border_mode='valid', subsample=(1,1), depth_multiplier=2, dim_ordering='tf', bias=True**
###Code
data_in_shape = (5, 5, 2)
conv = SeparableConvolution2D(4, 3, 3, activation='relu', border_mode='valid',
subsample=(1, 1), depth_multiplier=2, dim_ordering='tf', bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(161)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
print('b shape:', weights[2].shape)
print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
from keras.layers.convolutional import Convolution2D
test_layer_0 = Input(shape=(5, 5, 1))
test_layer_1 = Convolution2D(2, 3, 3, activation='linear', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=False)(test_layer_0)
test_model = Model(input=test_layer_0, output=test_layer_1)
test_model.set_weights([weights[0][:,:,0:1,:]])
test_result = test_model.predict(np.array([data_in[:,:,0:1]]))
print('out shape:', test_result[0].shape)
print('out:', format_decimal(test_result[0].ravel().tolist()))
from keras.layers.convolutional import Convolution2D
test_layer_0 = Input(shape=(5, 5, 1))
test_layer_1 = Convolution2D(2, 3, 3, activation='linear', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=False)(test_layer_0)
test_model = Model(input=test_layer_0, output=test_layer_1)
test_model.set_weights([weights[0][:,:,1:2,:]])
test_result = test_model.predict(np.array([data_in[:,:,1:2]]))
print('out shape:', test_result[0].shape)
print('out:', format_decimal(test_result[0].ravel().tolist()))
din=np.concatenate([np.array([0.0918, -0.250826, 0.752085, -0.396841, 1.239056, -0.569188, 0.67132, -0.204467, -0.241087, 0.062854, 0.829858, -0.047888, 2.479452, -1.706853, 0.834468, -1.525005, 0.699256, -0.386894]).reshape((3,3,2)),
np.array([1.301919, 1.174606, -0.754713, -0.528116, -0.431178, -0.245584, 0.482141, -0.370073, -0.96779, -0.070169, -0.373107, -0.637402, 0.148506, 0.112963, -0.600912, -0.627807, 0.361648, -0.466574]).reshape((3,3,2))], axis=2)
din.ravel()
from keras.layers.convolutional import Convolution2D
test_layer_0 = Input(shape=(3, 3, 4))
test_layer_1 = Convolution2D(4, 1, 1, activation='linear', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=False)(test_layer_0)
test_model = Model(input=test_layer_0, output=test_layer_1)
test_model.set_weights([
weights[1]
])
test_result = test_model.predict(np.array([din]))
for i in range(4):
test_result[0][:,:,i] += weights[2][i]
print('out shape:', test_result[0].shape)
print('out:', format_decimal(test_result[0].ravel().tolist()))
###Output
out shape: (3, 3, 4)
out: [2.19505, -0.849839, -0.064323, -1.029974, 1.195129, -0.305476, -0.933523, -1.710334, 1.986473, -0.608713, -1.095067, -2.246176, 1.623991, -0.91593, -0.62239, -2.083865, 0.185464, 0.333382, -0.209328, -0.04148, 1.187908, -0.495996, -0.657218, -1.762427, 4.113845, -1.490486, -2.210908, -4.352599, 1.904537, -0.706008, -1.924138, -2.949446, 1.652338, -0.920442, -0.81737, -2.268492]
###Markdown
**[convolutional.SeparableConvolution2D.2] 16 3x3 filters on 5x5x4 input, activation='relu', border_mode='valid', subsample=(1,1), depth_multiplier=3, dim_ordering='tf', bias=True**
###Code
data_in_shape = (5, 5, 4)
conv = SeparableConvolution2D(16, 3, 3, activation='relu', border_mode='valid',
subsample=(1, 1), depth_multiplier=3, dim_ordering='tf', bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(162)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
print('b shape:', weights[2].shape)
print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
###Output
depthwise_kernel shape: (3, 3, 4, 3)
depthwise_kernel: [-0.358, 0.51151, 0.75604, 0.386489, 0.569431, 0.817873, 0.457721, -0.408117, -0.449859, 0.424449, 0.082829, 0.998103, 0.945015, 0.709712, -0.370497, 0.356245, -0.884929, 0.043746, -0.621581, 0.85929, 0.990731, -0.792113, -0.248862, 0.73592, 0.776986, -0.737512, 0.171503, 0.087118, 0.837766, 0.029361, -0.527272, 0.147416, 0.365717, -0.096697, -0.51945, 0.201977, -0.009271, -0.127708, -0.256995, -0.276956, -0.161184, 0.128936, 0.760311, 0.372614, -0.370107, -0.727062, -0.563261, 0.591426, 0.272393, -0.93231, -0.928845, -0.643906, 0.089174, 0.384091, 0.007193, -0.367848, -0.38731, -0.234478, -0.308496, -0.678848, -0.207197, 0.253484, -0.34155, 0.908772, 0.089221, 0.623042, 0.166206, -0.112566, 0.332798, -0.223755, 0.964719, 0.218697, -0.330948, -0.353988, -0.054023, 0.932003, 0.029574, 0.639507, 0.011236, -0.281541, 0.737159, -0.578811, 0.394064, 0.143987, 0.857385, 0.817422, 0.935796, -0.679153, 0.290835, 0.583472, 0.714901, 0.848205, -0.012259, -0.978666, -0.311411, 0.311485, 0.832908, -0.320367, -0.979657, -0.595945, -0.478919, -0.594402, -0.073486, 0.082921, 0.801928, 0.980857, 0.457544, -0.81261]
pointwise_kernel shape: (1, 1, 12, 16)
pointwise_kernel: [-0.358, 0.51151, 0.75604, 0.386489, 0.569431, 0.817873, 0.457721, -0.408117, -0.449859, 0.424449, 0.082829, 0.998103, 0.945015, 0.709712, -0.370497, 0.356245, -0.884929, 0.043746, -0.621581, 0.85929, 0.990731, -0.792113, -0.248862, 0.73592, 0.776986, -0.737512, 0.171503, 0.087118, 0.837766, 0.029361, -0.527272, 0.147416, 0.365717, -0.096697, -0.51945, 0.201977, -0.009271, -0.127708, -0.256995, -0.276956, -0.161184, 0.128936, 0.760311, 0.372614, -0.370107, -0.727062, -0.563261, 0.591426, 0.272393, -0.93231, -0.928845, -0.643906, 0.089174, 0.384091, 0.007193, -0.367848, -0.38731, -0.234478, -0.308496, -0.678848, -0.207197, 0.253484, -0.34155, 0.908772, 0.089221, 0.623042, 0.166206, -0.112566, 0.332798, -0.223755, 0.964719, 0.218697, -0.330948, -0.353988, -0.054023, 0.932003, 0.029574, 0.639507, 0.011236, -0.281541, 0.737159, -0.578811, 0.394064, 0.143987, 0.857385, 0.817422, 0.935796, -0.679153, 0.290835, 0.583472, 0.714901, 0.848205, -0.012259, -0.978666, -0.311411, 0.311485, 0.832908, -0.320367, -0.979657, -0.595945, -0.478919, -0.594402, -0.073486, 0.082921, 0.801928, 0.980857, 0.457544, -0.81261, 0.653583, 0.201305, -0.792085, 0.341976, 0.600744, -0.250584, 0.266331, -0.843281, 0.979529, 0.365773, -0.230166, -0.370554, 0.734026, -0.031139, 0.622094, -0.495057, 0.58768, 0.263712, -0.859123, -0.743123, 0.088502, 0.910172, -0.76957, 0.574371, 0.960999, 0.991421, 0.552149, 0.687608, -0.431448, 0.596874, -0.714906, 0.776227, -0.48485, 0.323601, -0.015865, 0.975984, -0.540102, -0.80204, -0.507786, 0.904598, 0.90617, -0.641009, -0.936083, 0.56915, 0.707795, 0.108153, -0.513007, -0.799546, 0.31635, -0.298318, 0.91951, 0.921564, -0.886548, 0.098168, 0.968955, -0.43456, -0.4838, -0.512742, 0.84823, 0.21242, -0.635309, -0.145343, -0.624581, 0.243133, 0.240078, 0.104123, 0.875559, 0.389553, 0.900901, -0.865033, 0.893151, -0.293035, -0.241736, -0.021591, -0.384165, 0.073765, -0.937717, -0.771158, -0.221093, 0.874692, 0.610068, -0.767109, 0.001945, -0.660005]
b shape: (16,)
b: [-0.358, 0.51151, 0.75604, 0.386489, 0.569431, 0.817873, 0.457721, -0.408117, -0.449859, 0.424449, 0.082829, 0.998103, 0.945015, 0.709712, -0.370497, 0.356245]
in shape: (5, 5, 4)
in: [-0.884929, 0.043746, -0.621581, 0.85929, 0.990731, -0.792113, -0.248862, 0.73592, 0.776986, -0.737512, 0.171503, 0.087118, 0.837766, 0.029361, -0.527272, 0.147416, 0.365717, -0.096697, -0.51945, 0.201977, -0.009271, -0.127708, -0.256995, -0.276956, -0.161184, 0.128936, 0.760311, 0.372614, -0.370107, -0.727062, -0.563261, 0.591426, 0.272393, -0.93231, -0.928845, -0.643906, 0.089174, 0.384091, 0.007193, -0.367848, -0.38731, -0.234478, -0.308496, -0.678848, -0.207197, 0.253484, -0.34155, 0.908772, 0.089221, 0.623042, 0.166206, -0.112566, 0.332798, -0.223755, 0.964719, 0.218697, -0.330948, -0.353988, -0.054023, 0.932003, 0.029574, 0.639507, 0.011236, -0.281541, 0.737159, -0.578811, 0.394064, 0.143987, 0.857385, 0.817422, 0.935796, -0.679153, 0.290835, 0.583472, 0.714901, 0.848205, -0.012259, -0.978666, -0.311411, 0.311485, 0.832908, -0.320367, -0.979657, -0.595945, -0.478919, -0.594402, -0.073486, 0.082921, 0.801928, 0.980857, 0.457544, -0.81261, 0.653583, 0.201305, -0.792085, 0.341976, 0.600744, -0.250584, 0.266331, -0.843281]
out shape: (3, 3, 16)
out: [0.0, 3.072657, 6.661696, 1.212774, 0.0, 1.942042, 0.796062, 0.0, 0.0, 0.0, 0.0, 5.353453, 3.169381, 1.709415, 0.532102, 0.0, 0.0, 2.380967, 1.057094, 1.124686, 2.482994, 0.636369, 0.0, 0.177999, 0.501644, 0.346187, 1.655334, 2.568588, 3.838684, 2.13677, 0.0, 0.0, 0.244367, 0.0, 0.0, 0.040463, 1.887684, 0.447473, 0.0, 0.0, 0.392063, 0.461429, 0.740828, 0.0, 2.889798, 1.202188, 0.0, 2.990209, 0.263581, 1.668686, 0.0, 0.922838, 4.010374, 4.709476, 0.243737, 0.0, 0.0, 1.455777, 0.447792, 1.249569, 0.999484, 1.986179, 0.0, 1.88698, 0.0, 1.823954, 2.261445, 1.535919, 0.0, 0.0, 0.351947, 0.762176, 0.57681, 2.735742, 0.865883, 2.213735, 3.139838, 0.0, 0.150018, 0.0, 0.0, 0.169024, 2.122756, 0.0, 0.0, 0.0, 2.703279, 0.0, 0.0, 1.452518, 0.0, 1.760357, 2.195201, 1.397283, 0.647357, 1.03469, 1.720459, 0.787799, 1.818606, 0.0, 0.0, 3.229915, 0.0, 0.0, 0.0, 0.808574, 0.0, 0.0, 0.0, 3.093279, 0.0, 0.0, 0.0, 0.0, 6.157512, 0.969973, 2.362686, 1.642857, 1.567341, 0.0, 0.0, 1.257981, 0.0, 3.347936, 4.853856, 0.584134, 1.831214, 0.368586, 0.670326, 2.258959, 1.897032, 1.452034, 2.995703, 4.086173, 2.613929, 0.0, 0.0, 1.276499, 0.986229, 4.405969, 1.252966, 1.100708, 0.0, 0.0]
###Markdown
**[convolutional.SeparableConvolution2D.3] 4 3x3 filters on 5x5x2 input, activation='relu', border_mode='valid', subsample=(2,2), depth_multiplier=1, dim_ordering='tf', bias=True**
###Code
data_in_shape = (5, 5, 2)
conv = SeparableConvolution2D(4, 3, 3, activation='relu', border_mode='valid',
subsample=(2, 2), depth_multiplier=1, dim_ordering='tf', bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(163)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
print('b shape:', weights[2].shape)
print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
###Output
depthwise_kernel shape: (3, 3, 2, 1)
depthwise_kernel: [0.377928, -0.051821, -0.103487, -0.414631, 0.620403, -0.945685, 0.554492, 0.145757, -0.668254, -0.102765, -0.313849, -0.527573, 0.70103, 0.569257, 0.431741, 0.829388, 0.729616, -0.61478]
pointwise_kernel shape: (1, 1, 2, 4)
pointwise_kernel: [0.377928, -0.051821, -0.103487, -0.414631, 0.620403, -0.945685, 0.554492, 0.145757]
b shape: (4,)
b: [0.377928, -0.051821, -0.103487, -0.414631]
in shape: (5, 5, 2)
in: [0.620403, -0.945685, 0.554492, 0.145757, -0.668254, -0.102765, -0.313849, -0.527573, 0.70103, 0.569257, 0.431741, 0.829388, 0.729616, -0.61478, 0.698196, 0.758288, -0.776354, 0.043486, 0.118144, -0.573699, -0.135688, -0.820887, 0.216593, 0.883644, 0.429352, -0.471163, 0.735991, 0.175361, 0.712349, -0.294722, 0.028816, 0.423103, -0.223071, 0.102386, -0.307802, 0.501519, 0.305004, -0.691112, 0.866549, 0.393944, 0.366401, -0.132872, -0.712304, 0.061523, -0.87309, 0.060624, -0.26679, -0.935888, 0.873897, -0.83769]
out shape: (2, 2, 4)
out: [0.493029, 0.0, 0.172868, 0.0, 1.312607, 0.0, 0.0, 0.0, 0.19444, 0.105527, 0.0, 0.0, 0.262073, 0.015703, 0.0, 0.0]
###Markdown
**[convolutional.SeparableConvolution2D.4] 4 3x3 filters on 5x5x2 input, activation='relu', border_mode='same', subsample=(1,1), depth_multiplier=1, dim_ordering='tf', bias=True**
###Code
data_in_shape = (5, 5, 2)
conv = SeparableConvolution2D(4, 3, 3, activation='relu', border_mode='same',
subsample=(1, 1), depth_multiplier=1, dim_ordering='tf', bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(164)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
print('b shape:', weights[2].shape)
print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
###Output
depthwise_kernel shape: (3, 3, 2, 1)
depthwise_kernel: [-0.286035, -0.386595, -0.443495, 0.65068, -0.027871, 0.42356, -0.842032, 0.56074, 0.097075, -0.687678, 0.211609, -0.383519, -0.722563, -0.920196, 0.901117, -0.140214, 0.782672, 0.395253]
pointwise_kernel shape: (1, 1, 2, 4)
pointwise_kernel: [-0.286035, -0.386595, -0.443495, 0.65068, -0.027871, 0.42356, -0.842032, 0.56074]
b shape: (4,)
b: [-0.286035, -0.386595, -0.443495, 0.65068]
in shape: (5, 5, 2)
in: [-0.027871, 0.42356, -0.842032, 0.56074, 0.097075, -0.687678, 0.211609, -0.383519, -0.722563, -0.920196, 0.901117, -0.140214, 0.782672, 0.395253, -0.421894, 0.240522, 0.313608, 0.553496, 0.892848, 0.346932, 0.708907, -0.168408, -0.608734, 0.352148, -0.646845, -0.704696, 0.706245, 0.233727, -0.630529, 0.214598, 0.243023, -0.539184, -0.530638, -0.689965, 0.804915, 0.441358, 0.04869, -0.872442, -0.769665, -0.172659, 0.561906, -0.087356, -0.305674, 0.156377, 0.859227, 0.61066, 0.343536, -0.712298, 0.034094, 0.239021]
out shape: (5, 5, 4)
out: [0.0, 0.0, 0.0, 1.274639, 0.0, 0.0, 0.0, 0.605939, 0.0, 0.0, 0.0, 1.115433, 0.0, 0.0, 0.0, 1.387129, 0.0, 0.0, 0.0, 0.786538, 0.0, 0.0, 0.0, 1.292405, 0.294379, 0.072351, 1.020217, 0.0, 0.0, 0.0, 0.510053, 0.004305, 0.0, 0.0, 0.0, 1.326637, 0.016231, 0.0, 0.523518, 0.0, 0.0, 0.0, 0.0, 0.131063, 0.086156, 0.630381, 0.0, 0.358184, 0.0, 0.0, 0.0, 2.350895, 0.0, 0.0, 0.353669, 0.0, 0.22602, 0.684739, 0.0, 0.0, 0.0, 0.0, 0.0, 0.993245, 0.0, 0.0, 0.0, 0.811813, 0.0, 0.0, 0.0, 1.344548, 0.036525, 0.538957, 0.0, 0.444835, 0.0, 0.0, 0.0, 0.653008, 0.0, 0.0, 0.14365, 0.223034, 0.0, 0.0, 0.00937, 0.28599, 0.0, 0.0, 0.0, 0.855603, 0.0, 0.0, 0.021215, 0.016677, 0.0, 0.0, 0.0, 0.48765]
###Markdown
**[convolutional.SeparableConvolution2D.5] 4 3x3 filters on 5x5x2 input, activation='relu', border_mode='same', subsample=(1,1), depth_multiplier=2, dim_ordering='tf', bias=False**
###Code
data_in_shape = (5, 5, 2)
conv = SeparableConvolution2D(4, 3, 3, activation='relu', border_mode='same',
subsample=(1, 1), depth_multiplier=2, dim_ordering='tf', bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(165)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
# print('b shape:', weights[2].shape)
# print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
###Output
depthwise_kernel shape: (3, 3, 2, 2)
depthwise_kernel: [0.652446, 0.2318, 0.819831, 0.944374, 0.940098, 0.624866, -0.63481, -0.882357, 0.81222, -0.44532, 0.601451, -0.524795, -0.612798, 0.792021, 0.757943, 0.30891, 0.424309, 0.939364, 0.520832, -0.644457, 0.295776, -0.625702, 0.444817, 0.507751, 0.27355, -0.245684, -0.174924, -0.76745, -0.532202, -0.237121, 0.291332, -0.844156, -0.555355, -0.170294, -0.15161, 0.081458]
pointwise_kernel shape: (1, 1, 4, 4)
pointwise_kernel: [0.652446, 0.2318, 0.819831, 0.944374, 0.940098, 0.624866, -0.63481, -0.882357, 0.81222, -0.44532, 0.601451, -0.524795, -0.612798, 0.792021, 0.757943, 0.30891]
in shape: (5, 5, 2)
in: [0.424309, 0.939364, 0.520832, -0.644457, 0.295776, -0.625702, 0.444817, 0.507751, 0.27355, -0.245684, -0.174924, -0.76745, -0.532202, -0.237121, 0.291332, -0.844156, -0.555355, -0.170294, -0.15161, 0.081458, 0.932854, -0.540877, -0.862886, -0.360297, -0.406034, -0.593383, 0.778915, -0.877735, -0.595981, 0.164847, 0.338339, 0.933207, 0.5686, 0.93931, -0.596359, -0.191263, -0.20549, 0.968537, 0.467264, -0.983275, -0.430008, 0.244891, -0.989142, 0.45335, 0.268861, -0.781933, -0.263702, -0.314199, -0.043672, 0.057881]
out shape: (5, 5, 4)
out: [0.862566, 0.048018, 0.241041, 0.400052, 0.351313, 1.25112, 0.63279, 0.0, 0.0, 1.747549, 0.151011, 0.264638, 0.607059, 0.706304, 0.064874, 0.186923, 0.607271, 0.596091, 0.0, 0.0, 0.0, 1.1677, 0.0, 1.360053, 0.0, 1.767671, 3.229443, 2.93878, 0.109303, 1.31919, 0.641564, 0.839189, 0.0, 1.045304, 0.0, 1.160624, 0.414675, 0.675433, 2.603644, 1.58157, 0.533043, 0.821891, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.604507, 2.244225, 0.270241, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.564922, 0.877675, 1.179773, 0.0, 2.154162, 0.0, 0.0, 0.0, 0.0, 0.667535, 0.930289, 0.0, 0.355705, 0.0, 0.647852, 0.738893, 0.0, 0.507023, 0.0, 0.395042, 1.369129, 0.0, 0.0, 0.0, 0.0, 0.0, 0.355779, 0.625898, 0.101676, 0.0, 2.107639, 0.37634, 0.0, 0.098033, 0.0, 0.895843, 0.260409, 0.86668, 2.347926, 0.302149]
###Markdown
**[convolutional.SeparableConvolution2D.6] 4 3x3 filters on 5x5x2 input, activation='relu', border_mode='same', subsample=(2,2), depth_multiplier=2, dim_ordering='tf', bias=True**
###Code
data_in_shape = (5, 5, 2)
conv = SeparableConvolution2D(4, 3, 3, activation='relu', border_mode='same',
subsample=(2, 2), depth_multiplier=2, dim_ordering='tf', bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(166)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('depthwise_kernel shape:', weights[0].shape)
print('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))
print('pointwise_kernel shape:', weights[1].shape)
print('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))
print('b shape:', weights[2].shape)
print('b:', format_decimal(weights[2].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
###Output
depthwise_kernel shape: (3, 3, 2, 2)
depthwise_kernel: [-0.404522, 0.654554, 0.530772, -0.946748, 0.570535, 0.191906, 0.198967, 0.40347, 0.997535, 0.717759, -0.030665, -0.480131, -0.793351, -0.096084, -0.608614, 0.582545, -0.311023, 0.636141, -0.552496, -0.466095, 0.673995, 0.480495, -0.133135, -0.487911, -0.061628, -0.799714, -0.372442, 0.051244, -0.242036, 0.022129, -0.738668, -0.497968, -0.08996, 0.67791, 0.306663, 0.434282]
pointwise_kernel shape: (1, 1, 4, 4)
pointwise_kernel: [-0.404522, 0.654554, 0.530772, -0.946748, 0.570535, 0.191906, 0.198967, 0.40347, 0.997535, 0.717759, -0.030665, -0.480131, -0.793351, -0.096084, -0.608614, 0.582545]
b shape: (4,)
b: [-0.404522, 0.654554, 0.530772, -0.946748]
in shape: (5, 5, 2)
in: [0.570535, 0.191906, 0.198967, 0.40347, 0.997535, 0.717759, -0.030665, -0.480131, -0.793351, -0.096084, -0.608614, 0.582545, -0.311023, 0.636141, -0.552496, -0.466095, 0.673995, 0.480495, -0.133135, -0.487911, -0.061628, -0.799714, -0.372442, 0.051244, -0.242036, 0.022129, -0.738668, -0.497968, -0.08996, 0.67791, 0.306663, 0.434282, 0.083813, -0.565874, -0.282824, -0.481839, 0.799366, 0.816717, -0.703778, 0.804765, 0.427984, 0.445201, 0.174055, -0.444629, -0.753315, -0.424704, 0.43543, 0.795045, -0.460225, 0.351483]
out shape: (3, 3, 4)
out: [0.0, 0.53121, 0.84218, 0.0, 0.0, 0.352753, 0.217608, 0.46037, 0.0, 1.000543, 0.426013, 0.0, 0.0, 0.0, 0.028174, 0.0, 0.772403, 1.70105, 0.833395, 0.0, 0.0, 0.436665, 1.699903, 0.0, 0.0, 0.801162, 0.478923, 0.0, 0.0, 1.391281, 1.419269, 0.0, 0.035231, 0.011648, 0.142694, 0.0]
|
Big-Data-Clusters/CU8/Public/content/install/sop064-packman-uninstall-azdata.ipynb | ###Markdown
SOP064 - Uninstall azdata CLI (using package manager)=====================================================Steps----- Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("sop064-packman-uninstall-azdata.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', 'ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.']}
error_hints = {'azdata': [['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: "ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: "ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ["Can't open lib 'ODBC Driver 17 for SQL Server", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ["[Errno 2] No such file or directory: '..\\\\", 'TSG053 - ADS Provided Books must be saved before use', '../repair/tsg053-save-book-first.ipynb'], ["NameError: name 'azdata_login_secret_name' is not defined", 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', "TSG124 - 'No credentials were supplied' error from azdata login", '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', "TSG126 - azdata fails with 'accept the license terms to use this product'", '../repair/tsg126-accept-license-terms.ipynb']]}
install_hint = {'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb']}
###Output
_____no_output_____
###Markdown
Uninstall azdata CLI using OS specific package manager
###Code
import os
import sys
import platform
import webbrowser  # used by the Linux branch below, missing from the original imports
from pathlib import Path
if platform.system() == "Darwin":
run('brew uninstall azdata-cli')
elif platform.system() == "Windows":
# Get the product guid to be able to do the .msi uninstall (this can take 2 or 3 minutes)
#
product_guid = run("""powershell -Command "$product = get-wmiobject Win32_Product | Where {$_.Name -match 'Azure Data CLI'}; $product.IdentifyingNumber" """, return_output=True)
print (f"The product guid is: {product_guid}")
# Uninstall using the product guid
#
# NOTES:
# 1. This will pop up the User Access Control dialog, press 'Yes'
# 2. The installer dialog will appear (it may start as a background window)
#
run(f'msiexec /uninstall {product_guid} /passive')
elif platform.system() == "Linux":
webbrowser.open('https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata-linux-package')
else:
raise SystemExit(f"Platform '{platform.system()}' is not recognized, must be 'Darwin', 'Windows' or 'Linux'")
###Output
_____no_output_____
###Markdown
Related (SOP063, SOP054)
###Code
print('Notebook execution complete.')
###Output
_____no_output_____ |
02_Neural_Nets/02_Conv_NNets/01_cats_v_dogs/.ipynb_checkpoints/Data Preparation-checkpoint.ipynb | ###Markdown
Cats vrs Dogs> I'm going to prepare the data to train my `Conv NNet` using the data that I've already downloaded for my previous project `cats-vrs-dogs` (built with `sklearn-image-classification`). There are 2 folders, `cats_gray` and `dogs_gray`, each containing 100 `100x100` grayscale images. I've already moved these files into the root directory. Imports
###Code
import os
import numpy as np
import cv2
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
dogs_path = 'dogs_gray'
cats_path = 'cats_gray'
le = LabelEncoder()
target_names = np.array(["cat", "dog"])
target = np.array(le.fit_transform(target_names))
###Output
_____no_output_____
###Markdown
> `0` for cat and `1` for dog
###Code
data = []
###Output
_____no_output_____
###Markdown
Cats
###Code
for file in os.listdir(cats_path):
image = np.array(cv2.imread(os.path.join(cats_path, file)))
image = image/255
data.append([image, np.eye(2)[0]])
###Output
_____no_output_____
###Markdown
Dogs
###Code
for file in os.listdir(dogs_path):
image = np.array(cv2.imread(os.path.join(dogs_path, file)))
image = image/255
data.append([image, np.eye(2)[1]])
###Output
_____no_output_____
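###Markdown
A quick sanity check, added here for illustration: the intro promises 100x100 grayscale images, but `cv2.imread` without a flag decodes to 3 channels, so each array actually comes out as (100, 100, 3).
###Code
# Illustrative check of the first sample's image shape and one-hot label
print(data[0][0].shape)  # expected (100, 100, 3) with default cv2.imread
print(data[0][1])        # e.g. [1., 0.] for a cat
###Output
_____no_output_____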
###Markdown
Shuffling the data
###Code
np.random.shuffle(data)
###Output
_____no_output_____
###Markdown
Saving the data
###Code
np.save('cats_v_dogs.npy', data)
print("SAVED!!")
###Output
SAVED!!
###Markdown
Loading the Data
###Code
data_ = np.load('cats_v_dogs.npy', allow_pickle=True)
len(data_)
###Output
_____no_output_____
###Markdown
> That's all about data preparation
###Code
plt.imshow(data_[0][0])
###Output
_____no_output_____ |
06CodingExercise16Functions7IsGreater.ipynb | ###Markdown
Define a function called is_greater() that takes in two arguments and returns True if the first value is greater than the second, and False if it is less than or equal to the second.
###Code
def is_greater(a,b) :
'''
Takes in two arguments
Returns True if first > second
Returns False if first <= second
I/P : Two values a and b
O/P : True (or) False
'''
if a > b :
return True
else :
return False
print(is_greater(6,4))
print(is_greater(5.5,8.8))
print(is_greater('Badri','Aadhi'))
help(is_greater)
###Output
Help on function is_greater in module __main__:
is_greater(a, b)
Takes in two arguments
Returns True if first > second
Returns False if first <= second
I/P : Two values a and b
O/P : True (or) False
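###Markdown
As an added aside on the design (not part of the original exercise): a comparison such as `a > b` already evaluates to a bool, so the if/else can collapse into a single return statement.
###Code
def is_greater_short(a, b):
    '''Same behaviour as is_greater, written as a one-liner.'''
    return a > b

print(is_greater_short(6, 4))      # True
print(is_greater_short(5.5, 8.8))  # False
###Output
_____no_output_____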
|
Closed_during_the_month_(Registeration_Closure).xls.xlsx/.ipynb_checkpoints/Closed_during_the_month_(Registeration_Closure).xls-checkpoint.ipynb | ###Markdown
Codebook Author: [Patrick Guo](https://github.com/shpatrickguo) Closed_during_the_month_(Registeration_Closure).xls**Data provided by:** NGO Darpan**Source:** s3://daanmatchdatafiles/Closed_during_the_month_(Registeration_Closure).xls**Type:** xlsx**Last Modified:** October 27, 2021, 16:44:55 (UTC-07:00)**Size:** 520.5 KB```Closed_during_the_month_(Registeration_Closure).xls``` named ```reg_closed_df``` contains: List of companies struck off/closed during the month of August 2014.- ```COLUMN NAME```: Content - Issues/Transformations- ```S.No```: Index - ```CIN```: Corporate Identification number- ```COMPANY_NAME```: Name- ```CLASS```: Private or Public- ```COMPANY_STATUS```: Status- ```TYPE```: Government or Non-Government- ```DATE_OF_REGISTRATION```: Date of registration- ```LISTED```: Listed or Unlisted - 2 Missing values- ```COMPANY_INDICATOR```: Indian Company or not.- ```REGISTERED_STATE```: Registered State- ```ROC_CODE```: Registrar of Companies Code- ```INDUSTRIAL-CLASIFICATION```: 5-digit Industrial classification code.- ```DESCRIPTION```: Description of the company's industry. - 41 Missing values. Import Libraries
###Code
import boto3
import io
import string
import requests
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import missingno as msno
###Output
_____no_output_____
###Markdown
Load Data
###Code
client = boto3.client('s3')
resource = boto3.resource('s3')
obj = client.get_object(Bucket='daanmatchdatafiles', Key='Closed_during_the_month_(Registeration_Closure).xls')
df = pd.read_excel(io.BytesIO(obj['Body'].read()))
###Output
_____no_output_____
###Markdown
Registration Closure Dataset
###Code
# First 4 rows are blank and 5th row is column names
reg_closed_df = df.copy()
# Set column names to row 5
reg_closed_df.columns = reg_closed_df.iloc[4]
# Drop first 5 rows
reg_closed_df = reg_closed_df.iloc[5:, :]
# Reset Index
reg_closed_df.reset_index(drop = True, inplace = True)
reg_closed_df.head()
# Examing the structure of the dataframe
reg_closed_df.info()
# Examine the descriptive statistics of the dataframe
reg_closed_df.describe()
###Output
_____no_output_____
###Markdown
Missing values
###Code
# Identify the nullity of the dataframe
missing_values_hist = reg_closed_df.isna().sum()
print('Total Missing Values:\n', missing_values_hist)
# Identify the percentage of nullity in the dataframe for each collumn
missing_values_hist_perc = reg_closed_df.isnull().mean() * 100
print('Percentage of Missing Values:\n', missing_values_hist_perc)
# Visualize the completeness of the dataframe
msno.bar(reg_closed_df)
plt.show()
# Visualize the locations of the missing values of the dataset
msno.matrix(reg_closed_df)
plt.show()
###Output
_____no_output_____
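###Markdown
One added option (a sketch, not part of the original codebook) for the two sparsely-missing columns noted below is to fill them with explicit placeholders rather than dropping rows.
###Code
# Illustrative handling of the two sparse columns
filled = reg_closed_df.copy()
filled["LISTED"] = filled["LISTED"].fillna("Unknown")
filled["DESCRIPTION"] = filled["DESCRIPTION"].fillna("Not provided")
print(filled[["LISTED", "DESCRIPTION"]].isna().sum())
###Output
_____no_output_____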
###Markdown
Observations- ```LISTED``` is missing 0.108%.- ```DESCRIPTION``` is missing 2.21%. Columns
###Code
reg_closed_df.columns
###Output
_____no_output_____
###Markdown
S.No
###Code
reg_closed_df["S.No"]
###Output
_____no_output_____
###Markdown
Observations```S.No``` represents an index of companies. CIN
###Code
reg_closed_df["CIN"]
# Check length of each CIN.
reg_closed_df["CIN"].apply(len).unique()
# Check for duplicates.
reg_closed_df["CIN"].value_counts(ascending=False).head()
###Output
_____no_output_____
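###Markdown
As a quick added sanity check (an illustration, not part of the original codebook): a CIN is expected to be a 21-character alphanumeric code, which the sketch below verifies across the column.
###Code
# Hypothetical format check: a CIN should be exactly 21 alphanumeric characters
cin_pattern = r"^[A-Z0-9]{21}$"
invalid = ~reg_closed_df["CIN"].str.match(cin_pattern)
print("CINs not matching the 21-character pattern:", invalid.sum())
###Output
_____no_output_____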
###Markdown
Observations```CIN``` represents the Corporate Identification Number (CIN). It is a 21-digit alphanumeric code issued to companies incorporated within the country on being registered by the ROC situated in different states across India under the MCA. - There are no duplicates. COMPANY_NAME
###Code
reg_closed_df["COMPANY_NAME"]
# Check for duplicates.
reg_closed_df["COMPANY_NAME"].value_counts(ascending=False).head()
###Output
_____no_output_____
###Markdown
Observations```COMPANY_NAME``` represents the name of the company.- There are no duplicates. CLASS
###Code
reg_closed_df["CLASS"]
# Check unique Classes
reg_closed_df["CLASS"].value_counts(ascending=False).head()
# Visualize the proportion of public and private companies.
reg_closed_df["CLASS"].value_counts(ascending=False).plot(kind = 'barh')
plt.show()
###Output
_____no_output_____
###Markdown
Observations```CLASS``` indicates whether the company is Private or Public.- There is a much larger proportion of private companies than public companies. COMPANY_STATUS
###Code
reg_closed_df["COMPANY_STATUS"]
# Check unique
reg_closed_df["COMPANY_STATUS"].value_counts(ascending=False).head()
# Visualize the proportion of public and private companies.
reg_closed_df["COMPANY_STATUS"].value_counts(ascending=False).plot(kind = 'barh')
plt.show()
###Output
_____no_output_____
###Markdown
Observations```COMPANY_STATUS``` indicates the status of the company, - The largest proportion are the companies that were struck off. TYPE
###Code
reg_closed_df["TYPE"]
# Check unique
reg_closed_df["TYPE"].value_counts(ascending=False).head()
###Output
_____no_output_____
###Markdown
Observations```TYPE``` indicates the type of the company, - All companies are non-government. DATE_OF_REGISTRATION
###Code
reg_closed_df["DATE_OF_REGISTRATION"]
# Check years of registry
reg_closed_df["DATE_OF_REGISTRATION"].apply(lambda x: x.year).value_counts(ascending=False).head()
###Output
_____no_output_____
###Markdown
Observations```DATE_OF_REGISTRATION``` indicates the date the company was registered.- A large proportion of companies in this dataset were registered in 2010. LISTED
###Code
reg_closed_df["LISTED"]
# Check unique
reg_closed_df["LISTED"].value_counts(ascending=False).head()
###Output
_____no_output_____
###Markdown
Observations```LISTED``` indicates whether the company is listed.- Most are unlisted, only 2 are listed. COMPANY_INDICATOR
###Code
reg_closed_df["COMPANY_INDICATOR"]
# Check unique
reg_closed_df["COMPANY_INDICATOR"].value_counts(ascending=False).head()
###Output
_____no_output_____
###Markdown
Observations```COMPANY_INDICATOR``` indicates whether the company is Indian.- All are Indian companies. REGISTERED_STATE
###Code
reg_closed_df["REGISTERED_STATE"]
# Value Counts
reg_closed_df["REGISTERED_STATE"].value_counts(ascending=False)
len(reg_closed_df["REGISTERED_STATE"].unique())
###Output
_____no_output_____
###Markdown
Observations```REGISTERED_STATE``` indicates where the company is registered.- Most are in urban states, i.e. Maharashtra, Delhi.- Only 22 states are listed. ROC_CODE
###Code
reg_closed_df["ROC_CODE"]
# Value Counts
reg_closed_df["ROC_CODE"].value_counts(ascending=False)
len(reg_closed_df["ROC_CODE"].unique())
###Output
_____no_output_____
###Markdown
Observations```ROC_CODE``` indicates the company's Registrar of Companies code.- Most are in urban states, i.e. Delhi, Mumbai.- 22 ROC codes are listed. INDUSTRIAL_CLASIFICATION
###Code
reg_closed_df["INDUSTRIAL_CLASIFICATION"]
# Value Counts
reg_closed_df["INDUSTRIAL_CLASIFICATION"].value_counts(ascending=False)
###Output
_____no_output_____
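###Markdown
Before reading off observations, a short added sketch (an illustration, not part of the original codebook) pairs each classification code with the description recorded alongside it.
###Code
# Illustrative lookup of (classification code, description) pairs
pairs = reg_closed_df[["INDUSTRIAL_CLASIFICATION", "DESCRIPTION"]].drop_duplicates()
print(pairs.head(10))
###Output
_____no_output_____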
###Markdown
Observations```INDUSTRIAL_CLASIFICATION``` is the 5-digit code for the [national industry classification](http://mospi.nic.in/classification/national-industrial-classification). - ```51909``` refers to Wholesale n.e.c.- A large number of companies are classified as ```72200```. DESCRIPTION
###Code
reg_closed_df["DESCRIPTION"]
# Value Counts
reg_closed_df["DESCRIPTION"].value_counts(ascending=False)
###Output
_____no_output_____ |
adres.ipynb | ###Markdown
###Code
adres = "/content/drive/MyDrive/HU-BBY162-2021/adres.txt"
f = open(adres, "a")
BirinciBilgi = input("Ad: ")
İkinciBilgi = input("Soyadı: ")
ÜçüncüBilgi = input("e-Posta: ")
f.write(BirinciBilgi+ " " + İkinciBilgi+ " " + ÜçüncüBilgi+ " " + "\n")
f.close()
f = open(adres, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: yelda
Soyadı: bakıcıoglu
e-Posta: [email protected]
Yelda bakıcıoglu
[email protected] bakıcıoglu [email protected]
###Markdown
###Code
adres = "/content/drive/MyDrive/HU-BBY162-2021/adres.txt"
f = open(adres, "a")
BirinciBilgi = input("Ad: ")
İkinciBilgi = input("Soyadı: ")
ÜçüncüBilgi = input("e-Posta: ")
f.write(BirinciBilgi+ " " + İkinciBilgi+ " " + ÜçüncüBilgi+ " " + "\n")
f.close()
f = open(adres, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: Emel
Soyadı: Kayan
e-Posta: [email protected]
Emel Kayan
[email protected] Emel Kayan [email protected]
###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
About the AuthorMücahit Yazıcı Chapter 10: File Operations File ReadingPython has a number of built-in functions for reading information from, and writing it to, a file on your computer. The **open** function is used to open a file. A file can be opened in read mode (passing "r" as the second argument) or in write mode (passing "w" as the second argument). The **open** function returns the file object. The file must be closed for its contents to be saved.
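A context manager closes the file automatically, even if an error occurs along the way; the added sketch below (using a hypothetical example.txt path) shows this idiomatic alternative to calling close() by hand.
###Code
# Illustrative sketch: 'with' closes the file automatically on exit
with open("example.txt", "w") as f:  # hypothetical file name
    f.write("first line\n")
with open("example.txt", "r") as f:
    for line in f.readlines():
        print(line)
###Output
_____no_output_____
###Markdown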
###Code
'''
#Google Drive connection
from google.colab import drive
drive.mount('/gdrive')
'''
dosya = "/content/drive/MyDrive/adres.TXT"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Mücahit Yazıcı | [email protected]
###Markdown
File WritingIf you open a file using "w" (write) as the second argument, a new empty file is created. Note that if another file with the same name already exists, it will be deleted. If you want to append content to an existing file, you should use the "a" (append) modifier.
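The difference is easy to demonstrate; the added sketch below (with hypothetical demo file names) writes twice in each mode, leaving one line behind with "w" and two accumulated lines with "a".
###Code
# Illustrative sketch: "w" truncates on every open, "a" accumulates
for mode, path in [("w", "demo_w.txt"), ("a", "demo_a.txt")]:  # hypothetical files
    for i in range(2):
        f = open(path, mode)
        f.write("entry %d\n" % i)
        f.close()
    with open(path) as f:
        print(mode, "->", f.read().splitlines())
###Output
_____no_output_____
###Markdown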
###Code
'''
#Google Drive connection
from google.colab import drive
drive.mount('/gdrive')
'''
dosya = "/content/drive/MyDrive/adres.TXT"
adSoyad = input("Adınızı soyadınızı giriniz: ")
ePosta = input("E-posta adresinizi giriniz: ")
f = open(dosya, 'a') # parameter 'a' appends to the existing data
f.write(adSoyad + " | " + ePosta + "\n") # the trailing "\n" puts each new entry on its own line
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Adınızı soyadınızı giriniz: Mücahit Yazıcı
E-posta adresinizi giriniz: [email protected]
Mücahit Yazıcı | [email protected]
###Markdown
###Code
adres = "/content/drive/MyDrive/HU-BBY162-2021/çalışmalar/adres.txt"
f = open(adres, "a")
BirinciBilgi = input("Ad: ")
İkinciBilgi = input("Soyadı: ")
ÜçüncüBilgi = input("e-Posta: ")
f.write(BirinciBilgi+ " " + İkinciBilgi+ " " + ÜçüncüBilgi+ " " + "\n")
f.close()
f = open(adres, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: Melike Şevval
Soyadı: Tomak
e-Posta: [email protected]
melike şevval tomak
[email protected] Şevval Tomak [email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/HU -BBY162 -2021/adres.txt"
f = open(dosya, "a")
BirinciBilgi = input("Ad: ")
İkinciBilgi = input("Soyadı: ")
ÜçüncüBilgi = input("e-Posta: ")
f.write(BirinciBilgi+ " " + İkinciBilgi+ " " + ÜçüncüBilgi+ " " + "\n")
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: Gamze
Soyadı: Burulday
e-Posta: [email protected]
Gamze Burulday [email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/metin.txt"
adSoyad = input("Ad soyad giriniz: ")
eposta = input("E-posta adresi giriniz: ")
f = open(dosya, 'w')
f.write(adSoyad + " I " + eposta + "\n")
f.close()
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/metin.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad soyad giriniz: yasemin kaya
E-posta adresi giriniz: [email protected]
yasemin kaya I [email protected]
###Markdown
Dosya Okuma
###Code
# Google Drive file path
adresDosyam = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/metin.txt"
f = open(adresDosyam, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad-Soyad: Ayse Nur Atar
e-posta: [email protected]
###Markdown
Dosya Yazma
###Code
# Google Drive file path
adresDosyam = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/metin.txt"
f = open( adresDosyam, 'w')
f.write("Ayşe Nur Atar\n")
f.write("[email protected]\n")
f.write("[email protected]\n")
f.close()
f = open(adresDosyam, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ayşe Nur Atar
[email protected]
[email protected]
###Markdown
###Code
adres = "/content/drive/MyDrive/adres.txt"
f = open(adres, "a")
BirinciBilgi = input("Ad: ")
İkinciBilgi = input("Soyadı: ")
ÜçüncüBilgi = input("e-Posta: ")
f.write(BirinciBilgi+ " " + İkinciBilgi+ " " + ÜçüncüBilgi+ " " + "\n")
f.close()
f = open(adres, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: esin
Soyadı: acarturk
e-Posta: [email protected]
esin acarturk [email protected]
esin acarturk [email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/HU BBY162/Colab Notebooks/adres.txt"
f = open(dosya, 'a')
ilkBilgi = input("Adı: ")
ikinciBilgi = input("Soyadı: ")
ucuncuBilgi = input("E-posta Adresi: ")
f.write(ilkBilgi+ " " + ikinciBilgi+ " " + ucuncuBilgi+ " " + "\n")
f.close()
f = open(dosya, 'r')
for line in f.readlines():
print(line)
f.close()
###Output
Adı: Yağmur
Soyadı: Cengiz
E-posta Adresi: [email protected]
Yağmur Cengiz [email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: Melek
Soyad: Ayas
E-mail: [email protected]
###Markdown
New Section
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, 'w') # note: 'w' overwrites the file; pass 'a' instead to append to the existing data
f.write("test") # a trailing newline ("test\n") would put each new entry on its own line
f.close()
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
adSoyad = input("Ad soyad giriniz: ")
email = input("Email giriniz: ")
'''
def adSoyad():
bilgi = input("Ad Soyad giriniz: ")
def email():
bilg = input("Email giriniz: ")
def giris():
print(" 1- Ad ve soyad giriniz.")
print(" 2- E-mail giriniz.")
secilen = input("Hangi işlemi yapmak istiyorsunuz (1/2):")
if secilen == "1":
adSoyad()
else:
email()
giris() #I ran this function and it worked, but it invalidated the [f.write(adSoyad + " / " + email + "\n")] part and I couldn't fix it.
That's why I didn't include it.
'''
f = open(dosya, 'a') # parameter 'a' appends to the existing data
f.write(adSoyad + " / " + email + "\n")
f.close()
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad soyad giriniz: Melek Ayas
Email giriniz: [email protected]
Ad: Melek
Soyad: Ayas
E-mail: [email protected]
Melek Ayas / [email protected]
###Markdown
###Code
'''
Google Drive connection
from google.colab import drive
drive.mount('/gdrive')
'''
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
'''
Google Drive connection
from google.colab import drive
drive.mount('/gdrive')
'''
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt"
bilgi = input("Eklenecek bilgi: ")
f = open(dosya, 'a') # parameter 'a' appends to the existing data
f.write(bilgi + "\n") # the trailing "\n" puts each new entry on its own line
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Eklenecek bilgi: Selenay Gürbüz [email protected]
Selenay Gürbüz [email protected]
###Markdown
###Code
#file reading
#Google Drive connection
dosya ="/content/drive/MyDrive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/metin.txt"
f = open (dosya, "r")
for line in f.readlines():
print(line)
f.close()
#file writing
dosya = "/content/drive/MyDrive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/metin.txt"
f = open(dosya, "r")
f = open( dosya, 'w')
f.write ("Esra Nur\n")
f.write ("Ozcelik\n")
f.write ("[email protected]")
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Esra Nur
Ozcelik
[email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Onur Ceyhan
[email protected]
###Markdown
###Code
dosya = "/content/adres"
f = open(dosya, 'a')
bilgiler = input("İstenilen bilgi")
f.write (bilgiler + "\n")
f.close()
f = open(dosya, 'r')
for line in f.readlines():
print(line)
f.close()
###Output
İstenilen [email protected]
mustafa çetin
[email protected]
###Markdown
###Code
adresDosyasi = "/content/drive/MyDrive/adres.txt"
f = open(adresDosyasi, 'w')
f.write("AD SOYAD: E-POSTA")
f.close()
f = open(adresDosyasi, 'a')
f.write("\nOrçun Madran: [email protected]")
f.close()
f = open(adresDosyasi, 'a')
f.write("\nAhmet Bildirici: [email protected]")
f.close()
f = open(adresDosyasi, "r")
for line in f.readlines():
print(line)
f.close()
###Output
AD SOYAD: E-POSTA
Orçun Madran: [email protected]
Ahmet Bildirici: [email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/metin.txt"
f = open(dosya, 'a')
bilgiler = input("İstenilen bilgi")
f.write (bilgiler + "\n")
f.close()
f = open(dosya, 'r')
for line in f.readlines():
print(line)
f.close()
###Output
İstenilen bilgi [email protected]
Öykü Işıl Turhan
21923802
[email protected]
###Markdown
###Code
dosya= "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt"
giris = input("Bilgileri giriniz:")
f = open(dosya, 'a')
f.write(giris + "\n")
f.close()
giris = input("Bilgileri giriniz:")
f = open(dosya, "a")
f.write(giris + "\n")
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Bilgileri giriniz:Betül Manavoğlu
Bilgileri giriniz:[email protected]
Betül Manavoğlu
[email protected]
###Markdown
###Code
adres = "/content/drive/MyDrive/adres.txt"
f = open(adres, "a")
BirinciBilgi = input("Ad: ")
İkinciBilgi = input("Soyadı: ")
ÜçüncüBilgi = input("e-Posta: ")
f.write(BirinciBilgi+ " " + İkinciBilgi+ " " + ÜçüncüBilgi+ " " + "\n")
f.close()
f = open(adres, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad: Utku Özgür
Soyadı: Demirhan
e-Posta: [email protected]
Utku Özgür Demirhan [email protected]
Utku Özgür Demirhan [email protected]
###Markdown
###Code
#file reading
dosya = "/content/drive/MyDrive/Colab Notebooks/BBY162-2021/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
#file writing
dosya = "/content/drive/MyDrive/Colab Notebooks/BBY162-2021/adres.txtt"
f = open(dosya, "a")
f.write("ad soyad: Rabia Çiçek, e-posta:[email protected]\n")
f.write("ad soyad: Orçun Madran, e-posta:[email protected]\n")
###Output
_____no_output_____
###Markdown
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt"
adSoyad = input("Ad soyad giriniz: ")
email = input("Email giriniz: ")
f = open(dosya, 'a') # parameter 'a' appends to the existing data
f.write(adSoyad + " | " + email + "\n")
f.close()
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Ad soyad giriniz: Songül Artan
Email giriniz: [email protected]
Songül
Artan
[email protected]
Songül Artan | [email protected]
###Markdown
###Code
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt.txt"
adSoyad = input("Adınızı soyadınızı giriniz: ")
email = input("E-posta adresinizi giriniz: ")
f = open(dosya, 'a') # parameter 'a' appends to the existing data
f.write(adSoyad + " / " + email+ "\n") # the trailing "\n" puts each new entry on its own line
f.close()
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
_____no_output_____
###Markdown
###Code
#file operations
dosya = "/content/drive/MyDrive/Colab Notebooks/HU-BBY162-2021/adres.txt"
adSoyad = input("Adınızı soyadınızı giriniz: ")
eposta = input("E-posta adresinizi giriniz: ")
f = open(dosya, "a")
f.write(adSoyad + " | " + eposta + "\n" )
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Adınızı soyadınızı giriniz: Rukiye Kucur
E-posta adresinizi giriniz: [email protected]
Ad, Soyad: Rukiye Kucur
E-posta: [email protected]
Rukiye Kucur | [email protected]
Rukiye Kucur | [email protected]
###Markdown
###Code
#file operations
dosya = "/content/drive/MyDrive/hu-bby162-2021/adres.txt"
adSoyad = input("Adınızı soyadınızı giriniz: ")
eposta = input("E-posta adresinizi giriniz: ")
f = open(dosya, "a")
f.write(adSoyad + " | " + eposta + "\n" )
f.close()
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
###Output
Adınızı soyadınızı giriniz: Ali İhsan Kuru
E-posta adresinizi giriniz: [email protected]
Ali İhsan Kuru | [email protected]
###Markdown
###Code
dosya = "/content/adres.txt"
f = open(dosya, 'a')
bilgi = input("Aranan Bilgiler ")
f.write (bilgi + "\n")
f.close()
f = open(dosya, 'r')
for line in f. readlines():
print(line)
f.close()
###Output
_____no_output_____ |
validation/event-sampling/checks/psf_check.ipynb | ###Markdown
PSF Check
###Code
from gammapy.cube import PSFMap
from gammapy.irf import EnergyDependentTablePSF
from pathlib import Path
import copy
from astropy.coordinates import SkyCoord
from astropy.table import Table
from gammapy.maps import WcsGeom
from gammapy.modeling.models import (
Models,
PointSpatialModel,
PowerLawSpectralModel,
SkyModel,
)
from gammapy.data import GTI, Observation, EventList
from gammapy.maps import MapAxis, WcsGeom, Map, MapCoord
from gammapy.maps.profile import ImageProfile, ImageProfileEstimator
from gammapy.irf import EnergyDispersion2D, PSF3D, EnergyDependentMultiGaussPSF, load_cta_irfs
from gammapy.cube import (
MapDataset,
MapDatasetEventSampler,
MapDatasetMaker,
)
from gammapy.utils.random import get_random_state
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.ticker import PercentFormatter
import astropy.units as u
def get_filename_dataset(livetime):
filename = f"data/dataset_{livetime.value:.0f}{livetime.unit}.fits.gz"
return BASE_PATH / filename
def get_filename_events(filename_dataset, filename_model, obs_id):
obs_id=int(obs_id)
model_str = filename_model.name.replace(filename_model.suffix, "")
filename_events = filename_dataset.name.replace("dataset", "events")
filename_events = BASE_PATH / f"data/models/{model_str}/" / filename_events
filename_events = filename_events.name.replace(".fits.gz", f"_{obs_id:04d}.fits.gz")
path = BASE_PATH / f"data/models/{model_str}/" / filename_events
return path
def gaussian(x, amp, wid):
return amp * np.exp(-(x)**2 / (2*wid**2.))
BASE_PATH = Path("../make.py").parent
model = "point-pwlsimple"
filename_model = BASE_PATH / f"models/{model}.yaml"
IRF_FILE = "$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
POINTING = SkyCoord(0.0, 0.0, frame="galactic", unit="deg")
LIVETIME = 10 * u.hr
GTI_TABLE = GTI.create(start=0 * u.s, stop=LIVETIME.to(u.s))
# dataset config
ENERGY_AXIS = MapAxis.from_energy_bounds("0.1 TeV", "100 TeV", nbin=10, per_decade=True)
ENERGY_AXIS_TRUE = MapAxis.from_energy_bounds("0.03 TeV", "300 TeV", nbin=20, per_decade=True)
MIGRA_AXIS = MapAxis.from_bounds(0.5, 2, nbin=150, node_type="edges", name="migra")
width=0.15
bins=0.01
WCS_GEOM = WcsGeom.create(
skydir=POINTING, width=(width, width), binsz=bins, frame="galactic", axes=[ENERGY_AXIS]
)
###Output
_____no_output_____
###Markdown
Create the dataset
###Code
spatial_model = PointSpatialModel(lon_0="0.0deg", lat_0="0.0deg", frame="galactic")
spectral_model = PowerLawSpectralModel(amplitude="1e-11 cm-2 s-1 TeV-1")
skymodel = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)
#Now we create a reference exposure map, we use to evaluate the model:
exposure = Map.create(
binsz=0.02,
map_type='wcs',
skydir=POINTING,
width="5 deg",
axes=[ENERGY_AXIS],
frame="galactic", unit="cm2 s"
)
exposure.data = 1e10 * 1000 * np.ones(exposure.data.shape)
evaluator = MapDataset.create(WCS_GEOM, models=skymodel)
evaluator.exposure.data += 1e13
n=evaluator.npred()
np.sum(n.data)
random_state = get_random_state(0)
n_events = random_state.poisson(np.sum(n.data))
coords = n.sample_coord(n_events, random_state)  # the original cell referenced `npred`, which is not defined until a later cell
### DOESN'T WORK....
###Output
_____no_output_____
###Markdown
START HERE
###Code
irfs = load_cta_irfs(IRF_FILE)
observation = Observation.create(
obs_id=1001, pointing=POINTING, livetime=LIVETIME, irfs=irfs
)
empty = MapDataset.create(WCS_GEOM, energy_axis_true=ENERGY_AXIS_TRUE, migra_axis=MIGRA_AXIS)
maker = MapDatasetMaker(selection=["exposure"])
dataset = maker.run(empty, observation)
models = Models.read(filename_model)
dataset.models = models
evaluator=dataset._evaluators[0]
npred = evaluator.compute_npred()
# a=npred.sum_over_axes()
###Output
_____no_output_____
###Markdown
Simulate the events
###Code
random_state = get_random_state(0)
n_events = random_state.poisson(np.sum(npred.data))
coords = npred.sample_coord(n_events, random_state)
coord = SkyCoord(coords['lon'], coords['lat'], frame='galactic')
ra,dec = coord.icrs.ra.value,coord.icrs.dec.value
###Output
_____no_output_____
###Markdown
Pointlike source distribution
###Code
model = PointSpatialModel(lon_0="0.01deg", lat_0="0.01deg", frame="galactic",)
geom = WcsGeom.create(
skydir=SkyCoord("0.0d 0.0d", frame="galactic"), width=(width, width), binsz=bins
)
###Output
_____no_output_____
###Markdown
Compare the 2D histograms
###Code
#plot the point source
model.plot(geom=geom, add_cbar=True)
#plot the npred histogram
plt.subplots(1,1)
weights = np.ones_like(ra)/float(len(ra))
plt.hist2d(ra,dec, bins=2,
cmap=plt.cm.YlOrRd, weights=weights,
)
plt.xlim(266.36,266.45)
plt.ylim(-28.96,-28.91)
plt.colorbar()
###Output
_____no_output_____
###Markdown
Create a fake PSF (a single Gaussian with sigma fixed at 0.1 deg, so the width fitted to the simulated events later can be compared against a known value)
###Code
psf_gauss = EnergyDependentMultiGaussPSF.read(filename=IRF_FILE, hdu="POINT SPREAD FUNCTION")
test = copy.deepcopy(psf_gauss.sigmas)
psf_gauss.sigmas[0][:] = 0.1
test[0][:] = 0.1
psf_gauss.norms[0][:] = 1
psf_3d = psf_gauss.to_psf3d(rad=np.linspace(0, 1, 100) * u.deg)
psf_gauss.sigmas
psf_gauss.containment_radius(1 *u.TeV,0.0*u.rad).deg
psf_gauss.plot_containment_vs_energy()
###Output
_____no_output_____
###Markdown
Create the dataset
###Code
irfs = load_cta_irfs(IRF_FILE)
irfs['psf'] = psf_3d
observation = Observation.create(
obs_id=1001, pointing=POINTING, livetime=LIVETIME*10, irfs=irfs
)
empty = MapDataset.create(WCS_GEOM, energy_axis_true=ENERGY_AXIS_TRUE, migra_axis=MIGRA_AXIS)
maker = MapDatasetMaker(selection=["exposure", "psf"])
dataset = maker.run(empty, observation)
###Output
WARNING: AstropyDeprecationWarning: The truth value of a Quantity is ambiguous. In the future this will raise a ValueError. [astropy.units.quantity]
###Markdown
Simulate the events
###Code
models = Models.read(filename_model)
dataset.models = models
observation = Observation.create(
obs_id=1001, pointing=POINTING, livetime=LIVETIME, irfs=irfs
)
sampler = MapDatasetEventSampler(random_state=0)
events = sampler.run(dataset, observation)
tab=copy.deepcopy(events)
# events.table.write('list.fits', format='fits')
tab.table = tab.table['RA','DEC','ENERGY']
# tab.table
counts = Map.create(frame="galactic", skydir=(0, 0.), binsz=0.01, npix=(100, 100))
counts.fill_events(tab)
counts.plot(add_cbar=True)
p = ImageProfileEstimator(method='sum', axis='radial', center=POINTING)
profile = p.run(counts)
index1 = np.where(profile.table['profile']>0)
dr1 = (profile.table['x_max']-profile.table['x_min'])[index1]
x1 = profile.table['x_ref'][index1]
y1 = profile.table['profile'][index1]#*2*np.pi*x1*dr1
yerr1 = y1**0.5
init_vals = [6, 0.1]
best_vals, covar = curve_fit(gaussian, x1,
(y1),
p0=init_vals,
sigma=yerr1)
print("############")
print(f"This is the normalization: {best_vals[0]} +\- {covar[0,0]**0.5}")
print(f"This is the sigma: {best_vals[1]} +\- {covar[1,1]**0.5}")
print("############")
plt.errorbar(x1, y1, yerr=yerr1)
plt.plot(x1, gaussian(x1,*best_vals))
plt.show()
###Output
_____no_output_____
###Markdown
Check sample_psf_coord()
###Code
table = Table()
n_events = int(len(events.table))
table['RA_TRUE'] = np.ones(n_events)*266.40498829 * u.deg
table['DEC_TRUE'] = np.ones(n_events)*-28.93617776 * u.deg
table['ENERGY_TRUE'] = events.table['ENERGY_TRUE']
table = EventList(table)
sampler = MapDatasetEventSampler(random_state=0)
evt_psf_mod = sampler.sample_psf(dataset.psf, table)
counts2 = Map.create(frame="galactic", skydir=(0, 0.), binsz=0.01, npix=(100, 100))
counts2.fill_events(evt_psf_mod)
counts2.plot(add_cbar=True)
p = ImageProfileEstimator(method='sum', axis='radial', center=POINTING)
profile = p.run(counts2)
index2 = np.where(profile.table['profile']>0)
dr2 = (profile.table['x_max']-profile.table['x_min'])[index2]
x2 = profile.table['x_ref'][index2]
y2 = profile.table['profile'][index2]#*2*np.pi*x2*dr2
yerr2 = y2**0.5
init_vals = [6, 0.1]
best_vals, covar = curve_fit(gaussian, x2,
(y2),
p0=init_vals,
sigma=yerr2)
print("############")
print(f"This is the normalization: {best_vals[0]} +\- {covar[0,0]**0.5}")
print(f"This is the sigma: {best_vals[1]} +\- {covar[1,1]**0.5}")
print("############")
plt.errorbar(x2, y2, yerr=yerr2)
plt.plot(x2, gaussian(x2,*best_vals))
plt.show()
###Output
_____no_output_____
###Markdown
Comparing 1D histograms
###Code
models = Models.read(filename_model)
dataset.models = models
sampler = MapDatasetEventSampler(random_state=0)
observation = Observation.create(
obs_id=1001, pointing=POINTING, livetime=LIVETIME, irfs=irfs
)
events = sampler.run(dataset, observation)
src_pos = SkyCoord(0.0*u.deg, 0.0*u.deg, frame='galactic')
evt_pos = SkyCoord(events.table['RA'], events.table['DEC'], frame='icrs')
evt_pos_psf_mod = SkyCoord(evt_psf_mod.table['RA'], evt_psf_mod.table['DEC'], frame='icrs')
sep = src_pos.separation(evt_pos).value
sep_other = src_pos.separation(evt_pos_psf_mod).value
plt.hist(sep_other, bins=50, label='fixed coordinates')
plt.hist(sep, bins=50, label='')
# plt.plot(x[2:], gaussian(x[2:],5000,0.1))
plt.legend()
###Output
_____no_output_____
###Markdown
Compare the npred profile with the simulated events
###Code
evaluator=dataset._evaluators[0]
npred = evaluator.compute_npred()
a=evaluator.apply_psf(npred)
a.plot_interactive(add_cbar=True)
npred.plot_interactive(add_cbar=True)
p = ImageProfileEstimator(method='sum', axis='radial', center=POINTING)
# profile = p.run(a)
profile = p.run(npred)
profile.table
index3 = np.where(profile.table['profile']>0)
dr3 = (profile.table['x_max']-profile.table['x_min'])[index3]
x3 = profile.table['x_ref'][index3]
y3 = profile.table['profile'][index3]#*2*np.pi*x3*dr3
yerr3 = y3**0.5
#compare the npred profile with the simulated events
plt.plot(x3, y3)
plt.plot(x2, y2)
###Output
_____no_output_____ |
GTSRB.ipynb | ###Markdown
Attack Examples on GTSRB
###Code
# Specify visible cuda device
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from parameters import *
from lib.utils import *
from lib.attacks import *
from lib.keras_utils import *
from lib.RandomTransform import *
from lib.OptCarlini import *
from lib.OptTransform import *
###Output
Using TensorFlow backend.
###Markdown
Initialize Model
###Code
# Build and load trained model
model = built_mltscl()
model.load_weights(WEIGTHS_PATH)
# Load dataset
x_train, y_train, x_val, y_val, x_test, y_test = load_dataset_GTSRB(
n_channel=N_CHANNEL)
# Convert labels to one-hot encoding
y_train = keras.utils.to_categorical(y_train, NUM_LABELS)
y_test = keras.utils.to_categorical(y_test, NUM_LABELS)
y_val = keras.utils.to_categorical(y_val, NUM_LABELS)
# Read sign names
signnames = read_csv("./input_data/signnames.csv").values[:, 1]
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 32, 32, 3) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 32) 2432 input_1[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 32, 32, 32) 0 conv2d_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 16, 16, 32) 0 dropout_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 16, 16, 64) 51264 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout) (None, 16, 16, 64) 0 conv2d_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 8, 8, 64) 0 dropout_2[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 8, 8, 128) 204928 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout) (None, 8, 8, 128) 0 conv2d_3[0][0]
____________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 4, 4, 32) 0 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 4, 4, 64) 0 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 4, 4, 128) 0 dropout_3[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 512) 0 max_pooling2d_4[0][0]
____________________________________________________________________________________________________
flatten_2 (Flatten) (None, 1024) 0 max_pooling2d_5[0][0]
____________________________________________________________________________________________________
flatten_3 (Flatten) (None, 2048) 0 max_pooling2d_3[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 3584) 0 flatten_1[0][0]
flatten_2[0][0]
flatten_3[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1024) 3671040 concatenate_1[0][0]
____________________________________________________________________________________________________
dropout_4 (Dropout) (None, 1024) 0 dense_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 43) 44075 dropout_4[0][0]
====================================================================================================
Total params: 3,973,739
Trainable params: 3,973,739
Non-trainable params: 0
____________________________________________________________________________________________________
###Markdown
Load data
###Code
SAMPLE_IMG_DIR = './traffic_sign_samples'
SAMPLE_LABEL = './traffic_sign_samples/samples_label.txt'
# Load sample images, labels and masks
x_smp, x_smp_full, y_smp, masks, masks_full = load_samples(SAMPLE_IMG_DIR, SAMPLE_LABEL)
# Set target class to attack
tg = 10
print "Target class: " + signnames[tg]
# Set number of samples
size = 10
y_target = np.zeros((len(x_test))) + tg
y_target = keras.utils.to_categorical(y_target, NUM_LABELS)
# Filter samples (originally misclassified, originally classified as target)
x_fil, y_fil, del_id = filter_samples(model, x_smp, y_smp, y_target=y_target)
x_fil_full = np.delete(x_smp_full, del_id, axis=0)
masks_fil = np.delete(masks, del_id, axis=0)
masks_fil_full = np.delete(masks_full, del_id, axis=0)
# Set samples to attack (choose some samples at random)
ind = np.random.choice(range(len(y_fil)), size=size)
x_ben = np.copy(x_fil[ind])
x_ben_full = np.copy(x_fil_full[ind])
y_ben = np.copy(y_fil[ind])
y_tg = np.copy(y_target[ind])
masks_ben = np.copy(masks_fil[ind])
masks_ben_full = np.copy(masks_fil_full[ind])
###Output
Target class: No passing for vechiles over 3.5 metric tons
###Markdown
Attack Examples: Fast Gradient
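For orientation before calling it, here is a minimal sketch of what a masked, targeted fast-gradient step could look like in Keras. This is a hedged illustration only: the real `fg()` helper lives in `lib.attacks`, and its exact signature and internals may differ.

```python
import numpy as np
import keras.backend as K

def fgsm_step(model, x, y_target, mag, mask=None):
    # gradient of the target-class cross-entropy w.r.t. the input batch
    loss = K.categorical_crossentropy(K.constant(y_target), model.output)
    grad = K.gradients(loss, model.input)[0]
    grad_fn = K.function([model.input, K.learning_phase()], [grad])
    g = grad_fn([x, 0])[0]
    # targeted attack: step *against* the gradient to lower the target loss
    perturb = -mag * np.sign(g)
    if mask is not None:
        perturb *= mask  # confine the perturbation to the sign surface
    return np.clip(x + perturb, 0.0, 1.0)
```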
###Code
# Specify list of magnitudes
mag_list = np.linspace(1.0, 2.0, 6)
x_fg = fg(model, x_ben, y_tg, mag_list, target=True, mask=masks_ben)
im = x_ben[0]
print "Original class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
im = x_fg[5, 0]
print "Adversarial class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
###Output
['Original class: No vechiles']
###Markdown
Iterative Attack: iterative steps in the gradient direction
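Conceptually this is the fast-gradient step above applied repeatedly with a small step size, re-clipping after every iteration. A hedged sketch, reusing the `fgsm_step` illustration from the previous section (again, the real `iterative()` in `lib.attacks` may differ):

```python
def iterative_attack(model, x, y_target, n_step=32, step_size=0.05, mask=None):
    x_adv = np.copy(x)
    for _ in range(n_step):
        # each iteration takes one small masked step and re-clips to [0, 1]
        x_adv = fgsm_step(model, x_adv, y_target, step_size, mask=mask)
    return x_adv
```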
###Code
x_it = iterative(model, x_ben, y_tg, n_step=32, step_size=0.05, target=True, mask=masks_ben)
im = x_ben[0]
print "Original class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
im = x_it[0]
print "Adversarial class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
###Output
['Original class: No vechiles']
###Markdown
Optimize Attack
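It helps to know the shape of the objective that `OptCarlini` presumably minimizes. Below is a hedged sketch of the standard Carlini-Wagner targeted loss with confidence margin `k` and trade-off constant `c`; the actual implementation in `lib.OptCarlini` (including the `var_change` change of variables and the `loss_op` options) may differ.

```python
import tensorflow as tf

def cw_targeted_objective(logits, target_onehot, delta, c=1.0, k=5.0):
    # z_t: target-class logit; z_other: largest non-target logit
    z_t = tf.reduce_sum(logits * target_onehot, axis=1)
    z_other = tf.reduce_max(logits - 1e4 * target_onehot, axis=1)
    # f <= 0 once the target logit beats every other logit by margin k
    f = tf.maximum(z_other - z_t, -k)
    # trade attack success off against the L2 size of the perturbation delta
    l2 = tf.reduce_sum(tf.square(delta), axis=[1, 2, 3])
    return tf.reduce_mean(l2 + c * f)
```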
###Code
# Initialize optimizer
opt = OptCarlini(model, c=1, lr=0.01, target=True, use_bound=False, init_scl=0.1,
loss_op=0, var_change=True, k=5)
# Run optimizer on sample (only take one sample at a time)
x_adv, norm = opt.optimize(x_ben[0], y_tg[0], n_step=5000, prog=True, mask=masks_ben[0])
# Run optimier with constant search
#x_adv, norm = opt.optimize_search(x_ben[0], y_tg[0], n_step=5000, search_step=10, prog=True, mask=masks_ben[0])
im = x_ben[0]
print "Original class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
im = x_adv
print "Adversarial class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
###Output
['Original class: No vechiles']
###Markdown
Optimize with Transformation
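`OptTransform` presumably makes the perturbation robust to viewing conditions by averaging the objective (or its gradient) over random transformations of the image, in the spirit of expectation over transformation; its `batch_size` argument would control how many transformed copies are averaged. A hedged sketch of that inner loop, where `sample_transform` stands in for `lib.RandomTransform`:

```python
def eot_gradient(grad_fn, sample_transform, x, y_target, batch_size=32):
    # average the attack gradient over randomly transformed copies of x so
    # the perturbation survives rotation, scaling, brightness changes, etc.
    grads = [grad_fn(sample_transform(x), y_target) for _ in range(batch_size)]
    return np.mean(grads, axis=0)
```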
###Code
# Initialize optimizer
opt = OptTransform(model, c=1, lr=0.01, target=True, use_bound=False, init_scl=0.1,
loss_op=0, var_change=True, k=5, batch_size=32)
# Run optimizer on sample
x_adv, norm = opt.optimize(x_ben[0], y_tg[0], n_step=5000, prog=True, mask=masks_ben[0])
# Run optimier with constant search
#x_adv, norm = opt.optimize_search(x_ben[0], y_tg[0], n_step=5000, search_step=10, prog=True, mask=masks_ben[0])
im = x_ben[0]
print "Original class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
im = x_adv
print "Adversarial class: " + signnames[predict(model, im)]
plt.imshow(im)
plt.axis('off')
plt.show()
# Evaluate each attack, return a list of adv success rate
print(eval_adv(model, x_fg, y_tg, target=True))
print(eval_adv(model, x_it, y_tg, target=True))
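# For reference, a hedged sketch of what such a success metric could compute
# (the real eval_adv in lib.utils may differ): the fraction of adversarial
# examples that the model classifies as the target class.
def success_rate(model, x_adv, y_target):
    preds = np.argmax(model.predict(x_adv), axis=1)
    return np.mean(preds == np.argmax(y_target, axis=1))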
###Output
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
0.0
###Markdown
Appendix: Model trainer
###Code
# Build model
model = built_mltscl()
# Load dataset
x_train, y_train, x_val, y_val, x_test, y_test = load_dataset_GTSRB(
n_channel=N_CHANNEL, train_file_name='train_extended.p')
y_train = keras.utils.to_categorical(y_train, NUM_LABELS)
y_test = keras.utils.to_categorical(y_test, NUM_LABELS)
y_val = keras.utils.to_categorical(y_val, NUM_LABELS)
filepath = './weights.{epoch:02d}-{val_loss:.2f}.hdf5'
modelCheckpoint = keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0,
save_best_only=False, save_weights_only=False,
mode='auto', period=1)
earlyStop = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.001, patience=5,
verbose=0, mode='auto')
model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=NUM_EPOCH, verbose=1,
callbacks=[modelCheckpoint, earlyStop], validation_data=(x_val, y_val),
shuffle=True, initial_epoch=0)
###Output
Train on 695980 samples, validate on 4410 samples
Epoch 1/100
695980/695980 [==============================] - 240s - loss: 0.7197 - acc: 0.8864 - val_loss: 0.5842 - val_acc: 0.9404
Epoch 2/100
695980/695980 [==============================] - 235s - loss: 0.5336 - acc: 0.9393 - val_loss: 0.4748 - val_acc: 0.9574
Epoch 3/100
695980/695980 [==============================] - 235s - loss: 0.4958 - acc: 0.9509 - val_loss: 0.4832 - val_acc: 0.9565
Epoch 4/100
695980/695980 [==============================] - 239s - loss: 0.4789 - acc: 0.9572 - val_loss: 0.4538 - val_acc: 0.9719
Epoch 5/100
695980/695980 [==============================] - 235s - loss: 0.4607 - acc: 0.9607 - val_loss: 0.4437 - val_acc: 0.9671
Epoch 6/100
390752/695980 [===============>..............] - ETA: 102s - loss: 0.4546 - acc: 0.9624 |
main_notebook/Generate_Text_with_GPT2.ipynb | ###Markdown
Text generation with Pretrained GPT2 models from Hugging Face on Amazon SageMaker The Poetry of NLP. You’ve just been hired by the Chicago Tribune to start a new poetry column. Congrats! The catch? You need to write a new poem every day. And it can’t just be any old string of syllables, you need it to be fresh, authentic, to resonate with the times and carry a sense of rhyme. You need it to delight your readers, to drive up the Tribune’s daily readership numbers and grow their social media presence. How are you going to accomplish this? With the help of Hugging Face and NLP models on SageMaker of course! In this notebook, we'll execute the following steps.1. Use the Hugging Face transformers SDK to download pretrained NLP models and test them locally.2. Select a dataset from among our favorite authors.3. Finetune the pretrained model using SageMaker training.4. Deploy the model into S3.5. Trigger a pipeline to test and deploy the model onto a multi-model endpoint.6. Test your multi-model endpoint locally to write poetry and text in the style of your favorite authors. Please note, this notebook was built on SageMaker Studio, using an ml.t3.medium kernel gateway application, and the Python 3.6 PyTorch 1.8 CPU Jupyter Kernel. Step 0. Install the transformers SDK locally.
###Code
%%writefile requirements.txt
transformers==4.6.1
datasets
!pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Step 1. Download a pretrained GPT2 model and test locally.We're using the Transformers SDK syntax available on the model card here: https://huggingface.co/gpt2 To make this model even better, we'll use a version of GPT2 that **has already been finetuned to generate poetry!**
###Code
from transformers import AutoTokenizer, AutoModelForCausalLM
poem_gpt = "ismaelfaro/gpt2-poems.en"
tokenizer = AutoTokenizer.from_pretrained(poem_gpt)
base_model = AutoModelForCausalLM.from_pretrained(poem_gpt)
from transformers import set_seed
def get_outputs(sample_outputs, tokenizer):
# takes a tokenizer, and raw output from the model, decodes these and formats nicely
rt = []
print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
txt = tokenizer.decode(sample_output, skip_special_tokens = True)
print("{}: {}...".format(i, txt))
print('')
rt.append(txt)
return rt
# setting the seed helps us ensure reproducibility. when the seed is consistent, we know the model results will be consistent
set_seed(42)
text = "A rose by any other name"
input_ids = tokenizer.encode(text, return_tensors = 'pt')
sample_outputs = base_model.generate(input_ids,
do_sample = True,
max_length = 70,
num_return_sequences = 5)
generic_outputs = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Interesting and entertaining! Clearly this model knows the form of poetry. It is obviously generating short lines, with a newline, and it seems to pick up some interesting concepts. Now, let's see if we can fine-tune this poem writer to fit the style of an author we have in mind. Step 2. Fine-tune the GPT2 Poem model with Anne Bradstreet.Now, we're going to fine-tune this model using another, much smaller, dataset. Then later we'll use a text classifier trained to evaluate this style of writer, and see how well our new text performs! If you're curious, take a look at some of the top authors in the English language available through this open domain site.https://www.public-domain-poetry.com/topauthors.php For the purposes of this workshop we'll stick to the longer poem pasted below. On your own time, outside of the workshop, if you'd like to modify this to work with a different text you are welcome to do so.Poke around at some of the available poems, and copy and paste what you like into this `train.txt` file below. We'll format that for finetuning GPT2 in the next step. In this notebook we're using a poem from Anne Bradstreet, a North American writer from the 17th Century.You may not have known this, but Anne Bradstreet was the first writer to be published in North America!
###Code
%%writefile train.txt
A Dialogue Between Old England And New
By Anne Bradstreet
New England.
Alas, dear Mother, fairest Queen and best,
With honour, wealth, and peace happy and blest,
What ails thee hang thy head, and cross thine arms,
And sit i’ the dust to sigh these sad alarms?
What deluge of new woes thus over-whelm
The glories of thy ever famous Realm?
What means this wailing tone, this mournful guise?
Ah, tell thy Daughter; she may sympathize.
Old England.
Art ignorant indeed of these my woes,
Or must my forced tongue these griefs disclose,
And must my self dissect my tatter’d state,
Which Amazed Christendom stands wondering at?
And thou a child, a Limb, and dost not feel
My weak’ned fainting body now to reel?
This physic-purging-potion I have taken
Will bring Consumption or an Ague quaking,
Unless some Cordial thou fetch from high,
Which present help may ease my malady.
If I decease, dost think thou shalt survive?
Or by my wasting state dost think to thrive?
Then weigh our case, if ’t be not justly sad.
Let me lament alone, while thou art glad.
New England.
And thus, alas, your state you much deplore
In general terms, but will not say wherefore.
What Medicine shall I seek to cure this woe,
If th’ wound’s so dangerous, I may not know?
But you, perhaps, would have me guess it out.
What, hath some Hengist like that Saxon stout
By fraud and force usurp’d thy flow’ring crown,
Or by tempestuous Wars thy fields trod down?
Or hath Canutus, that brave valiant Dane,
The regal peaceful Sceptre from thee ta’en?
Or is ’t a Norman whose victorious hand
With English blood bedews thy conquered Land?
Or is ’t intestine Wars that thus offend?
Do Maud and Stephen for the Crown contend?
Do Barons rise and side against their King,
And call in Foreign aid to help the thing?
Must Edward be depos’d? Or is ’t the hour
That second Richard must be clapp’d i’ th’ Tower?
Or is it the fatal jar, again begun,
That from the red, white pricking Roses sprung?
Must Richmond’s aid the Nobles now implore
To come and break the tushes of the Boar?
If none of these, dear Mother, what’s your woe?
Pray, do not fear Spain’s bragging Armado.
Doth your Ally, fair France, conspire your wrack,
Or doth the Scots play false behind your back?
Doth Holland quit you ill for all your love?
Whence is this storm, from Earth or Heaven above?
Is ’t drought, is ’t Famine, or is ’t Pestilence?
Dost feel the smart, or fear the consequence?
Your humble Child entreats you shew your grief.
Though Arms nor Purse she hath for your relief,
Such is her poverty, yet shall be found
A suppliant for your help, as she is bound.
Old England.
I must confess some of those Sores you name
My beauteous Body at this present maim,
But foreign Foe nor feigned friend I fear,
For they have work enough, thou knowest, elsewhere.
Nor is it Alcie’s son and Henry’s Daughter
Whose proud contention cause this slaughter;
Nor Nobles siding to make John no King,
French Louis unjustly to the Crown to bring;
No Edward, Richard, to lose rule and life,
Nor no Lancastrians to renew old strife;
No Crook-backt Tyrant now usurps the Seat,
Whose tearing tusks did wound, and kill, and threat.
No Duke of York nor Earl of March to soil
Their hands in Kindred’s blood whom they did foil;
No need of Tudor Roses to unite:
None knows which is the Red or which the White.
Spain’s braving Fleet a second time is sunk.
France knows how of my fury she hath drunk
By Edward third and Henry fifth of fame;
Her Lilies in my Arms avouch the same.
My Sister Scotland hurts me now no more,
Though she hath been injurious heretofore.
What Holland is, I am in some suspense,
But trust not much unto his Excellence.
For wants, sure some I feel, but more I fear;
And for the Pestilence, who knows how near?
Famine and Plague, two sisters of the Sword,
Destruction to a Land doth soon afford.
They’re for my punishments ordain’d on high,
Unless thy tears prevent it speedily.
But yet I answer not what you demand
To shew the grievance of my troubled Land.
Before I tell the effect I’ll shew the cause,
Which are my sins, the breach of sacred Laws:
Idolatry, supplanter of a Nation,
With foolish superstitious adoration,
Are lik’d and countenanc’d by men of might,
The Gospel is trod down and hath no right.
Church Offices are sold and bought for gain
That Pope had hope to find Rome here again.
For Oaths and Blasphemies did ever ear
From Beelzebub himself such language hear?
What scorning of the Saints of the most high!
What injuries did daily on them lie!
What false reports, what nick-names did they take,
Not for their own, but for their Master’s sake!
And thou, poor soul, wast jeer’d among the rest;
Thy flying for the Truth I made a jest.
For Sabbath-breaking and for Drunkenness
Did ever Land profaneness more express?
From crying bloods yet cleansed am not I,
Martyrs and others dying causelessly.
How many Princely heads on blocks laid down
For nought but title to a fading Crown!
’Mongst all the cruelties which I have done,
Oh, Edward’s Babes, and Clarence’s hapless Son,
O Jane, why didst thou die in flow’ring prime?
Because of Royal Stem, that was thy crime.
For Bribery, Adultery, for Thefts, and Lies
Where is the Nation I can’t paralyze?
With Usury, Extortion, and Oppression,
These be the Hydras of my stout transgression;
These be the bitter fountains, heads, and roots
Whence flow’d the source, the sprigs, the boughs, and fruits.
Of more than thou canst hear or I relate,
That with high hand I still did perpetrate,
For these were threat’ned the woeful day
I mocked the Preachers, put it fair away.
The Sermons yet upon record do stand
That cried destruction to my wicked Land.
These Prophets’ mouths (all the while) was stopt,
Unworthily, some backs whipt, and ears cropt;
Their reverent cheeks bear the glorious marks
Of stinking, stigmatizing Romish Clerks;
Some lost their livings, some in prison pent,
Some grossly fined, from friends to exile went:
Their silent tongues to heaven did vengeance cry,
Who heard their cause, and wrongs judg’d righteously,
And will repay it sevenfold in my lap.
This is fore-runner of my after-clap.
Nor took I warning by my neighbors’ falls.
I saw sad Germany’s dismantled walls,
I saw her people famish’d, Nobles slain,
Her fruitful land a barren heath remain.
I saw (unmov’d) her Armies foil’d and fled,
Wives forc’d, babes toss’d, her houses calcined.
I saw strong Rochelle yield’d to her foe,
Thousands of starved Christians there also.
I saw poor Ireland bleeding out her last,
Such cruelty as all reports have past.
Mine heart obdurate stood not yet aghast.
Now sip I of that cup, and just ’t may be
The bottom dregs reserved are for me.
New England.
To all you’ve said, sad mother, I assent.
Your fearful sins great cause there ’s to lament.
My guilty hands (in part) hold up with you,
A sharer in your punishment’s my due.
But all you say amounts to this effect,
Not what you feel, but what you do expect.
Pray, in plain terms, what is your present grief?
Then let’s join heads and hands for your relief.
Old England.
Well, to the matter, then. There’s grown of late
’Twixt King and Peers a question of state:
Which is the chief, the law, or else the King?
One saith, it’s he; the other, no such thing.
My better part in Court of Parliament
To ease my groaning land shew their intent
To crush the proud, and right to each man deal,
To help the Church, and stay the Common-Weal.
So many obstacles comes in their way
As puts me to a stand what I should say.
Old customs, new Prerogatives stood on.
Had they not held law fast, all had been gone,
Which by their prudence stood them in such stead
They took high Strafford lower by the head,
And to their Laud be ’t spoke they held ’n th’ Tower
All England’s metropolitan that hour.
This done, an Act they would have passed fain
No prelate should his Bishopric retain.
Here tugg’d they hard indeed, for all men saw
This must be done by Gospel, not by law.
Next the Militia they urged sore.
This was denied, I need not say wherefore.
The King, displeased, at York himself absents.
They humbly beg return, shew their intents.
The writing, printing, posting to and fro,
Shews all was done; I’ll therefore let it go.
But now I come to speak of my disaster.
Contention’s grown ’twixt Subjects and their Master,
They worded it so long they fell to blows,
That thousands lay on heaps. Here bleeds my woes.
I that no wars so many years have known
Am now destroy’d and slaughter’d by mine own.
But could the field alone this strife decide,
One battle, two, or three I might abide,
But these may be beginnings of more woe,
Who knows, the worst, the best may overthrow!
Religion, Gospel, here lies at the stake,
Pray now, dear child, for sacred Zion’s sake,
Oh, pity me in this sad perturbation,
My plundered Towns, my houses’ devastation,
My ravisht virgins, and my young men slain,
My wealthy trading fallen, my dearth of grain.
The seedtime’s come, but Ploughman hath no hope
Because he knows not who shall inn his crop.
The poor they want their pay, their children bread,
Their woful mothers’ tears unpitied.
If any pity in thy heart remain,
Or any child-like love thou dost retain,
For my relief now use thy utmost skill,
And recompense me good for all my ill.
New England.
Dear mother, cease complaints, and wipe your eyes,
Shake off your dust, cheer up, and now arise.
You are my mother, nurse, I once your flesh,
Your sunken bowels gladly would refresh.
Your griefs I pity much but should do wrong,
To weep for that we both have pray’d for long,
To see these latter days of hop’d-for good,
That Right may have its right, though ’t be with blood.
After dark Popery the day did clear;
But now the Sun in’s brightness shall appear.
Blest be the Nobles of thy Noble Land
With (ventur’d lives) for truth’s defence that stand.
Blest be thy Commons, who for Common good
And thy infringed Laws have boldly stood.
Blest be thy Counties, who do aid thee still
With hearts and states to testify their will.
Blest be thy Preachers, who do cheer thee on.
Oh, cry: the sword of God and Gideon!
And shall I not on them wish Mero’s curse
That help thee not with prayers, arms, and purse?
And for my self, let miseries abound
If mindless of thy state I e’er be found.
These are the days the Church’s foes to crush,
To root out Prelates, head, tail, branch, and rush.
Let’s bring Baal’s vestments out, to make a fire,
Their Mitres, Surplices, and all their tire,
Copes, Rochets, Croziers, and such trash,
And let their names consume, but let the flash
Light Christendom, and all the world to see
We hate Rome’s Whore, with all her trumpery.
Go on, brave Essex, shew whose son thou art,
Not false to King, nor Country in thy heart,
But those that hurt his people and his Crown,
By force expel, destroy, and tread them down.
Let Gaols be fill’d with th’ remnant of that pack,
And sturdy Tyburn loaded till it crack.
And ye brave Nobles, chase away all fear,
And to this blessed Cause closely adhere.
O mother, can you weep and have such Peers?
When they are gone, then drown your self in tears,
If now you weep so much, that then no more
The briny Ocean will o’erflow your shore.
These, these are they (I trust) with Charles our king,
Out of all mists such glorious days will bring
That dazzled eyes, beholding, much shall wonder
At that thy settled Peace, thy wealth, and splendour,
Thy Church and Weal establish’d in such manner
That all shall joy that thou display’dst thy banner,
And discipline erected so, I trust,
That nursing Kings shall come and lick thy dust.
Then Justice shall in all thy Courts take place
Without respect of persons or of case.
Then bribes shall cease, and suits shall not stick long,
Patience and purse of Clients for to wrong.
Then High Commissions shall fall to decay,
And Pursuivants and Catchpoles want their pay.
So shall thy happy Nation ever flourish,
When truth and righteousness they thus shall nourish.
When thus in Peace, thine Armies brave send out
To sack proud Rome, and all her vassals rout.
There let thy name, thy fame, and valour shine,
As did thine Ancestors’ in Palestine,
And let her spoils full pay with int’rest be
Of what unjustly once she poll’d from thee.
Of all the woes thou canst let her be sped,
Execute to th’ full the vengeance threatened.
Bring forth the beast that rul’d the world with’s beck,
And tear his flesh, and set your feet on’s neck,
And make his filthy den so desolate
To th’ ’stonishment of all that knew his state.
This done, with brandish’d swords to Turkey go,
(For then what is it but English blades dare do?)
And lay her waste, for so’s the sacred doom,
And do to Gog as thou hast done to Rome.
Oh Abraham’s seed, lift up your heads on high,
For sure the day of your redemption’s nigh.
The scales shall fall from your long blinded eyes,
And him you shall adore who now despise.
Then fullness of the Nations in shall flow,
And Jew and Gentile to one worship go.
Then follows days of happiness and rest.
Whose lot doth fall to live therein is blest.
No Canaanite shall then be found ’n th’ land,
And holiness on horses’ bells shall stand.
If this make way thereto, then sigh no more,
But if at all thou didst not see ’t before.
Farewell, dear mother; Parliament, prevail,
And in a while you’ll tell another tale.
###Output
Writing train.txt
###Markdown
Step 3. Format your training data for Hugging Face on Amazon SageMaker.Now, let's parse your training data to format it for finetuning GPT2 and training on Hugging Face.
###Code
data = []
with open('train.txt') as f:
for row in f.readlines():
d = row.strip()
if len(d) > 1:
data.append(d)
print ('Found {} valid objects in the training data.'.format(len(data)))
print (data[:10])
import sagemaker
sess = sagemaker.Session()
bucket = sess.default_bucket()
train_file_name = 'train.txt'
s3_train_data = 's3://{}/gpt2/{}'.format(bucket, train_file_name)
!aws s3 cp {train_file_name} {s3_train_data}
import sagemaker
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_name_or_path':"ismaelfaro/gpt2-poems.en",
'output_dir':'/opt/ml/model',
'do_train':True,
'train_file': '/opt/ml/input/data/train/{}'.format(train_file_name),
'num_train_epochs': 5,
# set batch size to 22 if using SM training compiler
"per_device_train_batch_size": 64,
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.6.1/examples/pytorch/language-modeling
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_clm.py',
source_dir='./examples/pytorch/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.11.0',
pytorch_version='1.9.0',
py_version='py38',
hyperparameters = hyperparameters,
# pass the training compiler config to speed up your job
compiler_config=TrainingCompilerConfig(),
environment = {'GPU_NUM_DEVICES':'1'},
disable_profiler = True,
debugger_hook_config = False
)
# starting the train job
# should take about 13 minutes to run on current settings
huggingface_estimator.fit({'train':s3_train_data}, wait = True)
###Output
_____no_output_____
###Markdown
Step 4. Test your trained model locally
###Code
from sagemaker.huggingface import HuggingFace
# redefining if you need to restart your kernel
# huggingface_estimator = HuggingFace.attach('<paste your training job name here>')
s3_model_data = huggingface_estimator.model_data
local_model_path = 'gpt2_finetuned'
!mkdir {local_model_path}
!aws s3 cp {s3_model_data} {local_model_path}
!tar -xvf {local_model_path}/model.tar.gz -C {local_model_path}
!rm {local_model_path}/model.tar.gz
from transformers import AutoTokenizer, AutoModelForCausalLM
# optional - rerun this if you need to restart your kernel. We are actually using the same tokenizer from before
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(local_model_path)
# step to make sure we can run inference with this model locally
model.eval()
from transformers import set_seed
set_seed(42)
text = "A rose by any other name "
input_ids = tokenizer.encode(text, return_tensors = 'pt')
sample_outputs = model.generate(input_ids,
do_sample = True,
max_length = 70,
num_return_sequences = 5)
bradstreet_raw = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Interesting, it certainly looks different. Let's see if we can modify this output using different parameters to invoke the trained model.
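As a quick reminder of what the sampling knobs below do, here is a hedged sketch of top-k and top-p (nucleus) filtering. It illustrates the idea behind the `top_k`/`top_p` parameters of `generate()`, not the exact `transformers` implementation:

```python
import torch
import torch.nn.functional as F

def filter_logits(logits, top_k=0, top_p=1.0):
    # top-k: keep only the k highest-scoring tokens
    if top_k > 0:
        kth_best = torch.topk(logits, top_k).values[..., -1, None]
        logits = logits.masked_fill(logits < kth_best, float('-inf'))
    # top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability exceeds p
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        probs = F.softmax(sorted_logits, dim=-1)
        cum_probs = torch.cumsum(probs, dim=-1)
        drop = cum_probs - probs > top_p  # tokens past the nucleus
        sorted_logits = sorted_logits.masked_fill(drop, float('-inf'))
        logits = torch.full_like(logits, float('-inf')).scatter(
            -1, sorted_idx, sorted_logits)
    return logits  # sample the next token from softmax(filtered logits)
```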
###Code
sample_outputs = model.generate(input_ids,
max_length=70,
do_sample=True,
# only pick tokens at and above this probability level
top_p=0.85,
# only pick from this many tokens
top_k=200,
num_return_sequences = 5)
bradstreet_top_85 = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Wow! Quite a difference - not all of these seem very much like Bradstreet, and they read as much more generic. Yet the logical coherence of some of them is strong. Let's try it again with a smaller top_k and a larger top_p.
###Code
sample_outputs = model.generate(input_ids,
max_length=70,
do_sample=True,
# only pick tokens at and above this probability level
top_p=0.95,
# only pick from this many tokens
top_k=110,
num_return_sequences = 5)
bradstreet_top_95 = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Interesting - under these terms the model seems even more generic. You can still pick up a hint of that very old English style of writing, and yet the base model's more modern, social-media-flavored terms come even more to the surface. Step 5. Load a Text Classifier to Quantify Our Generated TextNow, we're going to use another model from the HF Hub. This time it's a text classifier, built specifically to give a strong signal for whether or not our text seems like it's in the style of Anne Bradstreet.
###Code
from transformers import AutoTokenizer, AutoModelForSequenceClassification
anne_model_name = 'edubz/anne_bradstreet'
anne_tokenizer = AutoTokenizer.from_pretrained(anne_model_name)
anne_clf = AutoModelForSequenceClassification.from_pretrained(anne_model_name)
from scipy.special import softmax
def invoke_locally(text, anne_clf, anne_tokenizer):
    # tokenize the text and run it through the Bradstreet classifier
    input_ids = anne_tokenizer(text, return_tensors = 'pt')
    output = anne_clf(**input_ids)
    logits = output['logits'].detach().numpy().tolist()[0]
    # softmax turns the two logits into probabilities; the larger one wins
    res = softmax(logits).tolist()
    conf = max(res)
    label = res.index(conf)
    if label == 0:
        label_str = 'Not Anne'
    elif label == 1:
        label_str = 'Anne'
    return {'confidence': conf, 'label': label_str}
invoke_locally("Alas, dear Mother, fairest Queen and best", anne_clf, anne_tokenizer)
invoke_locally("A rose by any other name", anne_clf, anne_tokenizer)
invoke_locally("< paste your generated text here >", anne_clf, anne_tokenizer)
###Output
_____no_output_____
###Markdown
Now, run some tests of your own. Try different invocation parameters. What seems to get you the highest Anne scores? Step 6. Deploy your fine-tuned model onto a SageMaker multi-model endpoint. Now, let's deploy this model onto SageMaker. In particular we will trigger a pipeline to update an existing multi-model endpoint, and then invoke our model from that endpoint. We'll also list all available models from that endpoint, and test out generating text with each of these. Who knows, maybe we'll stumble on something good enough for the Tribune!
###Code
# mme = <point to MME here>
# your model isn't here yet
# list(mme.list_models())
# now invoke the pipeline
## ADD A GIF OF VIEWING THE PIPELINE INVOCATION HERE
###Output
_____no_output_____
###Markdown
Step 7. Test your fine-tuned model on a SageMaker multi-model endpoint
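If you are curious what the `Predictor` does under the hood, multi-model routing is just the `TargetModel` field on the runtime API. A hedged sketch with `boto3` (the endpoint name and model archive path below are placeholders, not values from this notebook):

```python
import json
import boto3

smr = boto3.client('sagemaker-runtime')
response = smr.invoke_endpoint(
    EndpointName='<your-mme-endpoint-name>',        # placeholder
    TargetModel='anne-bradstreet-gpt2.tar.gz',      # path relative to the MME S3 prefix
    ContentType='application/json',
    Body=json.dumps({'inputs': 'A rose by any other name'}),
)
print(json.loads(response['Body'].read()))
```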
###Code
# now, point to the S3 path for the model you just created, and add a name for it
# mme.add_model(model_data_source=s3_model_data, model_data_path='My-Finetuned-Model')
# show your model
# list(mme.list_models())
# predictor = sagemaker.predictor.Predictor(endpoint_name = 'hf-multi-gpt2-2022-02-23-23-29-24', sagemaker_session=sess)
predictor.serializer = sagemaker.serializers.JSONSerializer()
predictor.deserializer = sagemaker.deserializers.JSONDeserializer()
# predictor = point to MME predictor here
predictor.predict({"inputs":'A rose by any other name'}, target_model='anne-bradstreet-gpt2')
###Output
_____no_output_____
###Markdown
Step 8. Write poetry for the Chicago Tribune. Now select your favorite lines from each output from GPT2, and pass them in to the model. Feel free to modify the parameters using kwargs. When you are finished, you can submit your poem to our GitHub workshop page!**Please note** every time you invoke a new model via MME, AWS copies the model artifact from S3 to the SageMaker endpoint. That means **expect a big time delay whenever you invoke a new model.** One way to get around that is with model compilation, i.e., running SageMaker Neo to decrease the size, and thereby the runtime, of that model.In the poem below, I manually copied my favorite line from each output of the model, and fed it in to the generator. I manually pasted all of my favorites into the markdown file you see below.--- My poem - A rose by any other modelA rose by any other name has many meanings. When all that has been presented to us is a form of exaggeration. The language will not preserve. However, the old idea of he who has no business vainly passing by without any other Some unending mizzen, deceived and deceived, seems ever more absurd and likely to harm our time. We tuck his back into the sea which is on the plain almost as soon as we lose sight of him. A mariner shall pass. And I may leave nothing to thee till thou return, for as I said, My hand am strong when thou shouldst require it. This comes out of Kant's conviction that we have nothing in our minds
###Code
text = 'A rose by any other name has many meanings.'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':600}},
target_model='anne-bradstreet-gpt2')
text = 'However, the old idea of he who has no business vainly passing by without any other means'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':600}},
target_model='anne-bradstreet-gpt2')
text = 'A mariner shall pass'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':600}},
target_model='anne-bradstreet-gpt2')
text = 'A rose by any other model'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':100}},
target_model='anne-bradstreet-gpt2')
###Output
_____no_output_____
###Markdown
Text generation with Pretrained GPT2 models from Hugging Face on Amazon SageMaker The Poetry of NLPYou’ve just been hired by the Chicago Tribune to start a new poetry column. Congrats! The catch? You need to write a new poem every day. And it can’t just be any old string of syllables, you need it to be fresh, authentic, to resonate with the times and carry a sense of rhyme. You need it to delight your readers, to drive up the Tribune’s daily readership numbers and grow their social media presence. How are you going to accomplish this? With the help of Hugging Face and NLP models on SageMaker of course! In this notebook, we'll execute the following steps.1. Use the Hugging Face transformfers SDK to download pretrained NLP models and test them locally.2. Select a dataset from among our favorite authors.3. Finetune the pretrained model using SageMaker training.4. Deploy the model into S3.5. Trigger a pipeline to test and deploy the model onto a multi-container endpoint.6. Test your multi-model endpoint locally to write poetry and text in the style of your favorite authors. Please note, this notebook was built on SageMaker Studio, using an ml.t3.medium kernel gatway application, and the Python 3.6 PyTorch 1.8 CPU Jupyter Kernel. Step 0. Install the transformers SDK locally.
###Code
%%writefile requirements.txt
transformers==4.6.1
datasets
!pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Step 1. Download a pretrained GPT2 model and test locally.We're using the Transformers SDK syntax available on the model card here: https://huggingface.co/gpt2 To make this model even better, we'll use a version of GPT2 that **has already been finetuned to generate poetry!**
###Code
from transformers import AutoTokenizer, AutoModelForCausalLM
poem_gpt = "ismaelfaro/gpt2-poems.en"
tokenizer = AutoTokenizer.from_pretrained(poem_gpt)
base_model = AutoModelForCausalLM.from_pretrained(poem_gpt)
from transformers import set_seed
def get_outputs(sample_outputs, tokenizer):
# takes a tokenizer, and raw output from the model, decodes these and formats nicely
rt = []
print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
txt = tokenizer.decode(sample_output, skip_special_tokens = True)
print("{}: {}...".format(i, txt))
print('')
rt.append(txt)
return rt
# setting the seed helps us ensure reproducibility. when the seed is consistent, we know the model results will be consistent
set_seed(42)
text = "A rose by any other name"
input_ids = tokenizer.encode(text, return_tensors = 'pt')
sample_outputs = base_model.generate(input_ids,
do_sample = True,
max_length = 70,
num_return_sequences = 5)
generic_outputs = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Interesting and entertaining! Clearly this model knows the form of poetry. It is obviously generating short lines, with a newline, and it seems to pick up some interesting concepts. Now, let's see if we can fine-tune this poem writer to fit the style of an author we have in mind. Step 2. Fine-tune the GPT2 Poem model with Anne Bradstreet.Now, we're going to fine-tune this model using another, much smaller, dataset. Then later we'll use a text classifier trained to evaluate this style of writer, and see how well our new text performs! If you're curious, take a look at some of the top authors in the English language available through this open domain site.https://www.public-domain-poetry.com/topauthors.php For the purposes of this workshop we'll stick to the longer poem pasted below. On your time time, outside of the workshop, if you'd like to modify this to work with a different text you are welcome to do so.Poke around at some of the available poems, and copy and paste what you like into this `train.txt` file below. We'll format that for finetuning GPT2 in the next step. In this notebook we're using a poem from Anne Bradstreet, a North American writer from the 17th Century.You may not have known this, but Anne Bradstreet was the first writer to be published in the North America!
###Code
%%writefile train.txt
A Dialogue Between Old England And New
By Anne Bradstreet
New England.
Alas, dear Mother, fairest Queen and best,
With honour, wealth, and peace happy and blest,
What ails thee hang thy head, and cross thine arms,
And sit i� the dust to sigh these sad alarms?
What deluge of new woes thus over-whelm
The glories of thy ever famous Realm?
What means this wailing tone, this mournful guise?
Ah, tell thy Daughter; she may sympathize.
Old England.
Art ignorant indeed of these my woes,
Or must my forced tongue these griefs disclose,
And must my self dissect my tatter�d state,
Which Amazed Christendom stands wondering at?
And thou a child, a Limb, and dost not feel
My weak�ned fainting body now to reel?
This physic-purging-potion I have taken
Will bring Consumption or an Ague quaking,
Unless some Cordial thou fetch from high,
Which present help may ease my malady.
If I decease, dost think thou shalt survive?
Or by my wasting state dost think to thrive?
Then weigh our case, if �t be not justly sad.
Let me lament alone, while thou art glad.
New England.
And thus, alas, your state you much deplore
In general terms, but will not say wherefore.
What Medicine shall I seek to cure this woe,
If th� wound�s so dangerous, I may not know?
But you, perhaps, would have me guess it out.
What, hath some Hengist like that Saxon stout
By fraud and force usurp�d thy flow�ring crown,
Or by tempestuous Wars thy fields trod down?
Or hath Canutus, that brave valiant Dane,
The regal peaceful Sceptre from thee ta�en?
Or is �t a Norman whose victorious hand
With English blood bedews thy conquered Land?
Or is �t intestine Wars that thus offend?
Do Maud and Stephen for the Crown contend?
Do Barons rise and side against their King,
And call in Foreign aid to help the thing?
Must Edward be depos�d? Or is �t the hour
That second Richard must be clapp�d i� th� Tower?
Or is it the fatal jar, again begun,
That from the red, white pricking Roses sprung?
Must Richmond�s aid the Nobles now implore
To come and break the tushes of the Boar?
If none of these, dear Mother, what�s your woe?
Pray, do not fear Spain�s bragging Armado.
Doth your Ally, fair France, conspire your wrack,
Or doth the Scots play false behind your back?
Doth Holland quit you ill for all your love?
Whence is this storm, from Earth or Heaven above?
Is �t drought, is �t Famine, or is �t Pestilence?
Dost feel the smart, or fear the consequence?
Your humble Child entreats you shew your grief.
Though Arms nor Purse she hath for your relief�
Such is her poverty,�yet shall be found
A suppliant for your help, as she is bound.
Old England.
I must confess some of those Sores you name
My beauteous Body at this present maim,
But foreign Foe nor feigned friend I fear,
For they have work enough, thou knowest, elsewhere.
Nor is it Alcie�s son and Henry�s Daughter
Whose proud contention cause this slaughter;
Nor Nobles siding to make John no King,
French Louis unjustly to the Crown to bring;
No Edward, Richard, to lose rule and life,
Nor no Lancastrians to renew old strife;
No Crook-backt Tyrant now usurps the Seat,
Whose tearing tusks did wound, and kill, and threat.
No Duke of York nor Earl of March to soil
Their hands in Kindred�s blood whom they did foil;
No need of Tudor Roses to unite:
None knows which is the Red or which the White.
Spain�s braving Fleet a second time is sunk.
France knows how of my fury she hath drunk
By Edward third and Henry fifth of fame;
Her Lilies in my Arms avouch the same.
My Sister Scotland hurts me now no more,
Though she hath been injurious heretofore.
What Holland is, I am in some suspense,
But trust not much unto his Excellence.
For wants, sure some I feel, but more I fear;
And for the Pestilence, who knows how near?
Famine and Plague, two sisters of the Sword,
Destruction to a Land doth soon afford.
They�re for my punishments ordain�d on high,
Unless thy tears prevent it speedily.
But yet I answer not what you demand
To shew the grievance of my troubled Land.
Before I tell the effect I�ll shew the cause,
Which are my sins�the breach of sacred Laws:
Idolatry, supplanter of a N ation,
With foolish superstitious adoration,
Are lik�d and countenanc�d by men of might,
The Gospel is trod down and hath no right.
Church Offices are sold and bought for gain
That Pope had hope to find Rome here again.
For Oaths and Blasphemies did ever ear
From Beelzebub himself such language hear?
What scorning of the Saints of the most high!
What injuries did daily on them lie!
What false reports, what nick-names did they take,
Not for their own, but for their Master�s sake!
And thou, poor soul, wast jeer�d among the rest;
Thy flying for the Truth I made a jest.
For Sabbath-breaking and for Drunkenness
Did ever Land profaneness more express?
From crying bloods yet cleansed am not I,
Martyrs and others dying causelessly.
How many Princely heads on blocks laid down
For nought but title to a fading Crown!
�Mongst all the cruelties which I have done,
Oh, Edward�s Babes, and Clarence�s hapless Son,
O Jane, why didst thou die in flow�ring prime?�
Because of Royal Stem, that was thy crime.
For Bribery, Adultery, for Thefts, and Lies
Where is the Nation I can�t paralyze?
With Usury, Extortion, and Oppression,
These be the Hydras of my stout transgression;
These be the bitter fountains, heads, and roots
Whence flow�d the source, the sprigs, the boughs, and fruits.
Of more than thou canst hear or I relate,
That with high hand I still did perpetrate,
For these were threat�ned the woeful day
I mocked the Preachers, put it fair away.
The Sermons yet upon record do stand
That cried destruction to my wicked Land.
These Prophets� mouths (all the while) was stopt,
Unworthily, some backs whipt, and ears crept;
Their reverent cheeks bear the glorious marks
Of stinking, stigmatizing Romish Clerks;
Some lost their livings, some in prison pent,
Some grossly fined, from friends to exile went:
Their silent tongues to heaven did vengeance cry,
Who heard their cause, and wrongs judg�d righteously,
And will repay it sevenfold in my lap.
This is fore-runner of my after-clap.
Nor took I warning by my neighbors� falls.
I saw sad Germany�s dismantled walls,
I saw her people famish�d, Nobles slain,
Her fruitful land a barren heath remain.
I saw (unmov�d) her Armies foil�d and fled,
Wives forc�d, babes toss�d, her houses calcined.
I saw strong Rochelle yield�d to her foe,
Thousands of starved Christians there also.
I saw poor Ireland bleeding out her last,
Such cruelty as all reports have past.
Mine heart obdurate stood not yet aghast.
Now sip I of that cup, and just �t may be
The bottom dregs reserved are for me.
New England.
To all you�ve said, sad mother, I assent.
Your fearful sins great cause there �s to lament.
My guilty hands (in part) hold up with you,
A sharer in your punishment�s my due.
But all you say amounts to this effect,
Not what you feel, but what you do expect.
Pray, in plain terms, what is your present grief?
Then let�s join heads and hands for your relief.
Old England.
Well, to the matter, then. There�s grown of late
�Twixt King and Peers a question of state:
Which is the chief, the law, or else the King?
One saith, it�s he; the other, no such thing.
My better part in Court of Parliament
To ease my groaning land shew their intent
To crush the proud, and right to each man deal,
To help the Church, and stay the Common-Weal.
So many obstacles comes in their way
As puts me to a stand what I should say.
Old customs, new Prerogatives stood on.
Had they not held law fast, all had been gone,
Which by their prudence stood them in such stead
They took high Strafford lower by the head,
And to their Laud be �t spoke they held �n th� Tower
All England�s metropolitan that hour.
This done, an Act they would have passed fain
No prelate should his Bishopric retain.
Here tugg�d they hard indeed, for all men saw
This must be done by Gospel, not by law.
Next the Militia they urged sore.
This was denied, I need not say wherefore.
The King, displeased, at York himself absents.
They humbly beg return, shew their intents.
The writing, printing, posting to and fro,
Shews all was done; I�ll therefore let it go.
But now I come to speak of my disaster.
Contention�s grown �twixt Subjects and their Master,
They worded it so long they fell to blows,
That thousands lay on heaps. Here bleeds my woes.
I that no wars so many years have known
Am now destroy�d and slaughter�d by mine own.
But could the field alone this strife decide,
One battle, two, or three I might abide,
But these may be beginnings of more woe�
Who knows, the worst, the best may overthrow!
Religion, Gospel, here lies at the stake,
Pray now, dear child, for sacred Zion�s sake,
Oh, pity me in this sad perturbation,
My plundered Towns, my houses� devastation,
My ravisht virgins, and my young men slain,
My wealthy trading fallen, my dearth of grain.
The seedtime�s come, but Ploughman hath no hope
Because he knows not who shall inn his crop.
The poor they want their pay, their children bread,
Their woful mothers� tears unpitied.
If any pity in thy heart remain,
Or any child-like love thou dost retain,
For my relief now use thy utmost skill,
And recompense me good for all my ill.
New England.
Dear mother, cease complaints, and wipe your eyes,
Shake off your dust, cheer up, and now arise.
You are my mother, nurse, I once your flesh,
Your sunken bowels gladly would refresh.
Your griefs I pity much but should do wrong,
To weep for that we both have pray�d for long,
To see these latter days of hop�d-for good,
That Right may have its right, though �t be with blood.
After dark Popery the day did clear;
But now the Sun in�s brightness shall appear.
Blest be the Nobles of thy Noble Land
With (ventur�d lives) for truth�s defence that stand.
Blest be thy Commons, who for Common good
And thy infringed Laws have boldly stood.
Blest be thy Counties, who do aid thee still
With hearts and states to testify their will.
Blest be thy Preachers, who do cheer thee on.
Oh, cry: the sword of God and Gideon!
And shall I not on them wish Mero�s curse
That help thee not with prayers, arms, and purse?
And for my self, let miseries abound
If mindless of thy state I e�er be found.
These are the days the Church�s foes to crush,
To root out Prelates, head, tail, branch, and rush.
Let�s bring Baal�s vestments out, to make a fire,
Their Mitres, Surplices, and all their tire,
Copes, Rochets, Croziers, and such trash,
And let their names consume, but let the flash
Light Christendom, and all the world to see
We hate Rome�s Whore, with all her trumpery.
Go on, brave Essex, shew whose son thou art,
Not false to King, nor Country in thy heart,
But those that hurt his people and his Crown,
By force expel, destroy, and tread them down.
Let Gaols be fill�d with th� remnant of that pack,
And sturdy Tyburn loaded till it crack.
And ye brave Nobles, chase away all fear,
And to this blessed Cause closely adhere.
O mother, can you weep and have such Peers?
When they are gone, then drown your self in tears,
If now you weep so much, that then no more
The briny Ocean will o�erflow your shore.
These, these are they (I trust) with Charles our king,
Out of all mists such glorious days will bring
That dazzled eyes, beholding, much shall wonder
At that thy settled Peace, thy wealth, and splendour,
Thy Church and Weal establish�d in such manner
That all shall joy that thou display�dst thy banner,
And discipline erected so, I trust,
That nursing Kings shall come and lick thy dust.
Then Justice shall in all thy Courts take place
Without respect of persons or of case.
Then bribes shall cease, and suits shall not stick long,
Patience and purse of Clients for to wrong.
Then High Commissions shall fall to decay,
And Pursuivants and Catchpoles want their pay.
So shall thy happy Nation ever flourish,
When truth and righteousness they thus shall nourish.
When thus in Peace, thine Armies brave send out
To sack proud Rome, and all her vassals rout.
There let thy name, thy fame, and valour shine,
As did thine Ancestors� in Palestine,
And let her spoils full pay with int�rest be
Of what unjustly once she poll�d from thee.
Of all the woes thou canst let her be sped,
Execute to th� full the vengeance threatened.
Bring forth the beast that rul�d the world with�s beck,
And tear his flesh, and set your feet on�s neck,
And make his filthy den so desolate
To th� �stonishment of all that knew his state.
This done, with brandish�d swords to Turkey go,�
(For then what is it but English blades dare do?)
And lay her waste, for so�s the sacred doom,
And do to Gog as thou hast done to Rome.
Oh Abraham�s seed, lift up your heads on high,
For sure the day of your redemption�s nigh.
The scales shall fall from your long blinded eyes,
And him you shall adore who now despise.
Then fullness of the Nations in shall flow,
And Jew and Gentile to one worship go.
Then follows days of happiness and rest.
Whose lot doth fall to live therein is blest.
No Canaanite shall then be found �n th� land,
And holiness on horses� bells shall stand.
If this make way thereto, then sigh no more,
But if at all thou didst not see �t before.
Farewell, dear mother; Parliament, prevail,
And in a while you�ll tell another tale.
###Output
Writing train.txt
###Markdown
Step 3. Format your training data for Hugging Face on Amazon SageMaker.Now, let's parse your training data to format it for finetuning GPT2 and training on Hugging Face.
###Code
data = []
with open('train.txt') as f:
for row in f.readlines():
d = row.strip()
if len(d) > 1:
data.append(d)
print ('Found {} valid objects in the training data.'.format(len(data)))
print (data[:10])
import sagemaker
sess = sagemaker.Session()
bucket = sess.default_bucket()
train_file_name = 'train.txt'
s3_train_data = 's3://{}/gpt2/{}'.format(bucket, train_file_name)
!aws s3 cp {train_file_name} {s3_train_data}
import sagemaker
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig
# gets role for executing training job
role = sagemaker.get_execution_role()
hyperparameters = {
'model_name_or_path':"ismaelfaro/gpt2-poems.en",
'output_dir':'/opt/ml/model',
'do_train':True,
'train_file': '/opt/ml/input/data/train/{}'.format(train_file_name),
'num_train_epochs': 5,
# set batch size to 22 if using SM training compiler
"per_device_train_batch_size": 64,
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.6.1/examples/pytorch/language-modeling
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_clm.py',
source_dir='./examples/pytorch/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.11.0',
pytorch_version='1.9.0',
py_version='py38',
hyperparameters = hyperparameters,
# pass the training compiler config to speed up your job
compiler_config=TrainingCompilerConfig(),
environment = {'GPU_NUM_DEVICES':'1'},
disable_profiler = True,
debugger_hook_config = False
)
# starting the train job
# should take about 13 minutes to run on current settings
huggingface_estimator.fit({'train':s3_train_data}, wait = True)
###Output
_____no_output_____
###Markdown
Step 4. Test your trained model locally
###Code
from sagemaker.huggingface import HuggingFace
# redefining if you need to restart your kernel
# huggingface_estimator = HuggingFace.attach('<paste your training job here')
s3_model_data = huggingface_estimator.model_data
local_model_path = 'gpt2_finetuned'
!mkdir {local_model_path}
!aws s3 cp {s3_model_data} {local_model_path}
!tar -xvf {local_model_path}/model.tar.gz -C {local_model_path}
!rm {local_model_path}/model.tar.gz
from transformers import AutoTokenizer, AutoModelForCausalLM
# optional - rerun this if you need to restart your kernel. We are actually using the same tokenizer from before
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(local_model_path)
# step to make sure we can run inference with this model locally
model.eval()
from transformers import set_seed
set_seed(42)
text = "A rose by any other name "
input_ids = tokenizer.encode(text, return_tensors = 'pt')
sample_outputs = model.generate(input_ids,
do_sample = True,
max_length = 70,
num_return_sequences = 5)
bradsteet_raw = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Interesting, it certainly looks different. Let's see if we can modify this output using different paramters to invoke the trained model.
###Code
sample_outputs = model.generate(input_ids,
max_length=70,
do_sample=True,
# only pick tokens at and above this probability level
top_p=0.85,
# only pick from this many tokens
top_k=200,
num_return_sequences = 5)
bradstreet_top_85 = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Wow! Quite a difference - not all of these seem very much like Bradstreet, and just much more generic. Yet the logical coherence on some of them is strong. Let's try it again with a smaller top_k and top_p.
###Code
sample_outputs = model.generate(input_ids,
max_length=70,
do_sample=True,
# only pick tokens at and above this probability level
top_p=0.95,
# only pick from this many tokens
top_k=110,
num_return_sequences = 5)
bradstreet_top_95 = get_outputs(sample_outputs, tokenizer)
###Output
_____no_output_____
###Markdown
Interesting - under these terms the model seems even more generic. You can still pick up a hint of that very old English style of writing, and yet the social-media-based terms come even more to the surface. Step 5. Load a Text Classifier to Quantify Our Generated Text
Now, we're going to use another model from the HF Hub. This time it's a text classifier, built specifically to give a strong signal for whether or not our text seems like it's in the style of Anne Bradstreet.
###Code
from transformers import AutoTokenizer, AutoModelForSequenceClassification
anne_model_name = 'edubz/anne_bradstreet'
anne_tokenizer = AutoTokenizer.from_pretrained(anne_model_name)
anne_clf = AutoModelForSequenceClassification.from_pretrained(anne_model_name)
from scipy.special import softmax
def invoke_locally(text, anne_clf, anne_tokenizer):
    # tokenize with the classifier's own tokenizer and run its model
    input_ids = anne_tokenizer(text, return_tensors = 'pt')
    output = anne_clf(**input_ids)
    logits = output['logits'].detach().numpy().tolist()[0]
    res = softmax(logits).tolist()
    conf = max(res)
    label = res.index(conf)
    if label == 0:
        label_str = 'Not Anne'
    elif label == 1:
        label_str = 'Anne'
    return {'confidence': conf, 'label': label_str}
invoke_locally("Alas, dear Mother, fairest Queen and best", anne_clf, anne_tokenizer)
invoke_locally("A rose by any other name", anne_clf, anne_tokenizer)
invoke_locally("< paste your generated text here >", anne_clf, anne_tokenizer)
###Output
_____no_output_____
###Markdown
Now, run some tests of your own. Try different invocation parameters. What seems to get you the highest Anne scores? Step 4. Deploy your fine-tuned model onto a SageMaker multi-model endpointNow, let's deploy this model onto SageMaker. In particular we will trigger a pipeline to update an existing multi-model endpoint, and then invoke our model from that endpoint. We'll also list all available models from that endpoint, and test out generating text with each of these. Who knows, maybe we'll stumble on something good enough for the Tribune!
###Code
# mme = <point to MME here>
# your model isn't here yet
# list(mme.list_models())
# now invoke the pipeline
## ADD A GIF OF VIEWING THE PIPELINE INVOCATION HERE
###Output
_____no_output_____
###Markdown
Step 7. Test your fine-tuned model on the SageMaker multi-model endpoint
###Code
# now, point to the S3 path for the model you just created, and add a name for it
# mme.add_model(model_data_source=s3_model_data, model_data_path='My-Finetuned-Model')
# show your model
# list(mme.list_models())
# predictor = sagemaker.predictor.Predictor(endpoint_name = 'hf-multi-gpt2-2022-02-23-23-29-24', sagemaker_session=sess)
predictor.serializer = sagemaker.serializers.JSONSerializer()
predictor.deserializer = sagemaker.deserializers.JSONDeserializer()
# predictor = point to MME predictor here
predictor.predict({"inputs":'A rose by any other name'}, target_model='anne-bradstreet-gpt2')
###Output
_____no_output_____
###Markdown
Step 8. Write poetry for the Chicago Tribune
Now - select your favorite lines from each output from GPT, and pass them in to the model. Feel free to modify the parameters using kwargs. When you are finished, you can submit your poem to our GitHub workshop page!
**Please note** every time you invoke a new model via MME, AWS copies the model artifact from S3 to the SageMaker endpoint. That means **expect a big time delay whenever you invoke a new model.** One way to get around that is with model compilation, i.e. running SageMaker Neo to decrease the size, and thereby the runtime, of that model.
In the poem below, I manually copied my favorite line from each output of the model and fed it in to the generator. I manually pasted all of my favorites into the markdown cell you see below.
--- My poem - A rose by any other model
A rose by any other name has many meanings. When all that has been presented to us is a form of exaggeration. The language will not preserve. However, the old idea of he who has no business vainly passing by without any other Some unending mizzen, deceived and deceived, seems ever more absurd and likely to harm our time. We tuck his back into the sea which is on the plain almost as soon as we lose sight of him. A mariner shall pass. And I may leave nothing to thee till thou return, for as I said, My hand am strong when thou shouldst require it. This comes out of Kant's conviction that we have nothing in our minds
###Code
text = 'A rose by any other name has many meanings.'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':600}},
target_model='anne-bradstreet-gpt2')
text = 'However, the old idea of he who has no business vainly passing by without any other means'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':600}},
target_model='anne-bradstreet-gpt2')
text = 'A mariner shall pass'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':600}},
target_model='anne-bradstreet-gpt2')
text = 'A rose by any other model'
predictor.predict({"inputs":text,
'parameters':{'max_length':70,
'do_sample':True,
# only pick tokens at and above this probability level
'top_p':0.99,
# only pick from this many tokens
'top_k':100}},
target_model='anne-bradstreet-gpt2')
###Output
_____no_output_____ |
Lab-05-05/New York Philharmonic.ipynb | ###Markdown
New York Philharmonic Due Thursday, May 12 at 8 AM
In this lab, you will analyze XML data of every one of the [New York Philharmonic](http://www.nyphil.org)'s concerts between 1963 and 1973. The data resides in the file `/data/nyphil.xml`. Note that the same program may have been used for several concerts, so the number of times a work was _programmed_ might be different from the number of times it was _performed_. You are highly encouraged to skim the [Beautiful Soup documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/). Unlike most documentation, it's concise and organized!
Question 0 (5 points)
Use [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to read the XML data into a Python object. Store the Python object called `data`. Make sure the tests below run without any errors, as this question is autograded.
###Code
from bs4 import BeautifulSoup
fp = open("/data/nyphil.xml")
data = BeautifulSoup(fp, "xml")
root = list(data.children)[0]
assert(root.name == "programs")
assert(sum(1 for _ in root.children) == 1931)
###Output
_____no_output_____
###Markdown
Question 1 (15 points)
Which works (please give composer and title) were programmed the most times over this time period? (No explanation necessary; just print out the top works, alongside the counts of how many programs they appeared on.)
###Code
import pandas as pd
composer_tags = data.find_all("composerName")
worktitle_tags = data.find_all("workTitle")
composer_list_str = []
worktitle_list_str = []
for composer in composer_tags:
composer_list_str.append(composer.string)
for worktitle in worktitle_tags:
worktitle_list_str.append(worktitle.string)
works_list = list(zip(composer_list_str,worktitle_list_str))
works_dict = {} # maps (ComposerName, Work Title) to the number of times a work was programmed
for work in works_list:
if work not in works_dict:
works_dict[work] = 1
else:
works_dict[work] += 1
num_times_ea_work_programmed_df = pd.Series(works_dict).to_frame()
num_times_ea_work_programmed_df.columns = ['Number of Times a Work Was Programmed']
num_times_ea_work_programmed_df = num_times_ea_work_programmed_df.reset_index()
num_times_ea_work_programmed_df = num_times_ea_work_programmed_df.sort_values(by = "Number of Times a Work Was Programmed", ascending = False)
num_times_ea_work_programmed_df = num_times_ea_work_programmed_df.rename(columns = {'index': "(Composer Name, Work Title)" }).reset_index()
num_times_ea_work_programmed_df.loc[0:10]
###Output
_____no_output_____
###Markdown
Question 2 (20 points)
Which works (please give composer and title) were performed the most times over this time period? (No explanation necessary; just print out the top works, alongside the counts of how many times they were performed.)
*Look at concertInfo. Some programs had multiple concertInfo tags, which means that each work under a program was performed n(concertInfo) times.
###Code
# composer_tags = data.find_all("composerName")
# worktitle_tags = data.find_all("workTitle")
# composer_list_str = []
# worktitle_list_str = []
# for composer in composer_tags:
# composer_list_str.append(composer.string)
# for worktitle in worktitle_tags:
# worktitle_list_str.append(worktitle.string)
# works_list = list(zip(composer_list_str,worktitle_list_str))
program_tags = data.find_all("program")
works_dict2 = {}
for program in program_tags:
concert_infos_list_per_ID = program.find_all("concertInfo")
num_performances_per_ID = len(list(concert_infos_list_per_ID) )
for work in program.find_all("work"):
if work['ID'] not in works_dict2 and work['ID'] != "0*":
works_dict2[work['ID']] = num_performances_per_ID
else:
if work['ID'] != "0*":
works_dict2[work['ID']] += num_performances_per_ID
num_performances_per_work = pd.Series(works_dict2).to_frame()
num_performances_per_work.columns = ['Number of Performances Per Work']
num_performances_per_work = num_performances_per_work.reset_index()
num_performances_per_work = num_performances_per_work.sort_values(by = 'Number of Performances Per Work', ascending = False)
num_performances_per_work = num_performances_per_work.rename(columns = {'index': "Work ID" })
num_performances_per_work = num_performances_per_work.reset_index()
num_performances_per_work.loc[0:10]
# performance_count_per_program = [] #num concertinfo's per id, DONT PUT IT IN LIST OR WILL HAVE INDEXING COMPLEXITIES
# for program in program_tags:
# concert_infos_list_per_ID = list(program.find_all("concertInfo"))
# num_performances_per_ID = len(concert_infos_list_per_ID)
# performance_count_per_program.append(num_performances_per_ID)
# performance_count_per_program
###Output
_____no_output_____
###Markdown
Question 3a (20 points)
Make a Pandas DataFrame, where each row is a work that was programmed by the New York Philharmonic. The columns should include the composer, work title, conductor, and the date of the first performance of that program. (Hint: you may want to look back at the Pandas documentation to remind yourself how to use `pd.DataFrame` to create a DataFrame from a dict.)
Please print out the first few rows of your DataFrame.
###Code
program_tags = data.find_all("program")
# test_dict = {
# 'composer':[1,2],
# 'work title':[3,4],
# 'conductor':[5,6]
# }
# df3a_test = pd.DataFrame(test_dict)
# #keys of dict become columns(composer,worktitle, conductor...)
# #values of dict are lists that contain varying content of the column, such as workID's
# #each row has pieces of info that represent the work id
# #list in values from dict must contain all the composers or conducts , etc..should be global to for loop
# df3a_test
works_info_data = {}
composerNames_list = []
workTitles_list = []
conductorNames_list = []
#first, put each info from works into sep lists:
for program in program_tags:
#find smallest timestamp
# strDates = []
# dates = list(program.find_all("Date"))
# for tag in dates:
# strDates.append(tag.string)
# earliest_date = min(dates)
# print(dates)
for work in program.find_all("work"):
if work['ID'] != '0*':
for work_detail in work.children:
if work_detail.name == 'composerName':
composerNames_list.append(work_detail.string)
elif work_detail.name == 'workTitle':
workTitles_list.append(work_detail.string)
elif work_detail.name == 'conductorName':
conductorNames_list.append(work_detail.string)
input_dict = {}
dummy_list = ['N/A'] * 198 # used solely to make size of smaller arrays = size of largest array for creating dataframe
#creating an array of na's that is size 198
conductorNames_list.extend(dummy_list)
input_dict['composerNames'] = composerNames_list
input_dict['workTitles'] = workTitles_list
input_dict['conductorNames'] = conductorNames_list
#can't create dataframe if arrays are all different sizes..fixed
df_3a = pd.DataFrame(input_dict)
df_3a
# print(len(composerNames_list)) #2 sizes smaller, must fill in rest to get same length
# print(len(workTitles_list))
# print(len(conductorNames_list))
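# The prompt also asks for the date of the first performance of each program.
# A possible completion sketch - it assumes every concertInfo holds a Date tag
# (as the commented-out attempt above suggests), and 'firstPerformance' is a
# hypothetical column name; alignment is approximate if some works lack tags.
first_dates = []
for program in program_tags:
    date_strings = [tag.string for tag in program.find_all("Date")]
    earliest = pd.to_datetime(date_strings).min() if date_strings else pd.NaT
    for work in program.find_all("work"):
        if work['ID'] != '0*':
            first_dates.append(earliest)
# pad/truncate with NaT, mirroring the 'N/A' length fix used above
while len(first_dates) < len(df_3a):
    first_dates.append(pd.NaT)
df_3a['firstPerformance'] = first_dates[:len(df_3a)]
df_3a.head()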
###Output
_____no_output_____
###Markdown
Question 3b (10 points)
Use the DataFrame you created above to determine Leonard Bernstein's favorite composers. That is, which composers appeared on the most programs where Bernstein was conducting?
###Code
df_3a_filterby_Bernstein = df_3a.loc[(df_3a['conductorNames'] == "Bernstein, Leonard") ] #get only rows where bernstein was conducting
df_3b = df_3a_filterby_Bernstein.groupby('composerNames')['composerNames'].count().to_frame()
df_3b.columns = ["Num Times Composer Appeared where Bernstein was conducting"]
df_3b = df_3b.sort_values(by = "Num Times Composer Appeared where Bernstein was conducting", ascending = False).reset_index()
df_3b.loc[0:2]
###Output
_____no_output_____
###Markdown
Bernstein's top three favorite composers: Beethoven, Mahler, Tchaikovsky
Question 4 (20 points)
For each composer, calculate the number of programs that featured one (or more) of his works. Sort the composers in descending order of the number of programs in which they appeared.
**Think:** Why can't you just call `.groupby("composer").count()` on your Pandas DataFrame from the previous question? (Because a composer can appear several times on a single program, a plain count tallies works rather than distinct programs.)
###Code
#check if program contains target composer name.
#if so, increment that composer's count once per program, even when several
#of his works appear on the same program
program_tags = data.find_all("program")
num_programs_featuring_composerswork_dict = {}
composerNames_noduplicates = list(set(composerNames_list))
for composer in composerNames_noduplicates:
    for program in program_tags:
        comp_names_inprogram = [tag.string for tag in program.find_all('composerName')]
        if composer in comp_names_inprogram:
            if composer not in num_programs_featuring_composerswork_dict:
                num_programs_featuring_composerswork_dict[composer] = 1
            else:
                num_programs_featuring_composerswork_dict[composer] += 1
num_programs_featuring_composerswork_df = pd.Series(num_programs_featuring_composerswork_dict).to_frame().reset_index()
num_programs_featuring_composerswork_df.columns = ['composerNames', "number of programs that featured one (or more) of composer's works"]
num_programs_featuring_composerswork_df = num_programs_featuring_composerswork_df.sort_values(by = "number of programs that featured one (or more) of composer's works", ascending = False )
num_programs_featuring_composerswork_df = num_programs_featuring_composerswork_df.reset_index()
num_programs_featuring_composerswork_df
###Output
_____no_output_____ |
Classification with ML_ Predict Crime Category.ipynb | ###Markdown
**In this notebook, we demonstrate the application of Random Forest, Naive Bayes and Neural Network classifiers to a classification task on the San Francisco Crime Dataset.**
The steps of the classification task:
1. Import Libraries
1. Preliminary Analysis
1. Data Preparation
1. Feature Selection
1. Dataset Splitting
1. Random Forest Modelling
1. Neural Network Modelling
1. Naive Bayes Modelling
###Code
# 1. Import Libraries
# Visualization Libraries
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
#Preprocessing Libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, confusion_matrix, classification_report, accuracy_score, f1_score
# ML Libraries
from sklearn.ensemble import RandomForestClassifier,VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
# Evaluation Metrics
from yellowbrick.classifier import ClassificationReport
from sklearn import metrics
# Read dataset and display first 5 row
df = pd.read_csv('../input/train.csv', error_bad_lines=False)
df.head(5)
# 2. Preliminary Analysis
df.info()
# 3. Data Preparation
# Remove irrelevant/not meaningfull attributes
df = df.drop(['Descript'], axis=1)
df = df.drop(['Resolution'], axis=1)
df.info()
# Splitting the Date to Day, Month, Year, Hour, Minute, Second
df['date2'] = pd.to_datetime(df['Dates'])
df['Year'] = df['date2'].dt.year
df['Month'] = df['date2'].dt.month
df['Day'] = df['date2'].dt.day
df['Hour'] = df['date2'].dt.hour
df['Minute'] = df['date2'].dt.minute
df['Second'] = df['date2'].dt.second
df = df.drop(['Dates'], axis=1)
df = df.drop(['date2'], axis=1)
df.head(5)
# Convert Categorical Attributes to Numerical
df['PdDistrict'] = pd.factorize(df["PdDistrict"])[0]
df['Address'] = pd.factorize(df["Address"])[0]
df['DayOfWeek'] = pd.factorize(df["DayOfWeek"])[0]
df['Year'] = pd.factorize(df["Year"])[0]
df['Month'] = pd.factorize(df["Month"])[0]
df['Day'] = pd.factorize(df["Day"])[0]
df['Hour'] = pd.factorize(df["Hour"])[0]
df['Minute'] = pd.factorize(df["Minute"])[0]
df['Second'] = pd.factorize(df["Second"])[0]
df.head(5)
# Display targer class
Target = 'Category'
print('Target: ', Target)
# Plot Bar Chart visualize Crime Types
plt.figure(figsize=(14,10))
plt.title('Amount of Crimes by Category')
plt.ylabel('Crime Category')
plt.xlabel('Amount of Crimes')
df.groupby([df['Category']]).size().sort_values(ascending=True).plot(kind='barh')
plt.show()
# Display all unique classes
Classes = df['Category'].unique()
Classes
#Encode target labels into categorical variables:
df['Category'] = pd.factorize(df["Category"])[0]
df['Category'].unique()
# 4. Feature Selection using Filter Method
# Split Dataframe to target class and features
X_fs = df.drop(['Category'], axis=1)
Y_fs = df['Category']
#Using Pearson Correlation
plt.figure(figsize=(20,10))
cor = df.corr()
sns.heatmap(cor, annot=True, cmap=plt.cm.Reds)
plt.show()
#Correlation with output variable
cor_target = abs(cor['Category'])
#Selecting highly correlated features
relevant_features = cor_target[cor_target>0.02]
relevant_features
# At this point, the attributes are selected manually based on the Feature Selection part above.
Features = ["Address","Year","Hour", "Minute" ]
print('Full Features: ', Features)
# 5. Split dataset to Training Set & Test Set
x, y = train_test_split(df,
test_size = 0.2,
train_size = 0.8,
random_state= 3)
x1 = x[Features] #Features to train
x2 = x[Target] #Target Class to train
y1 = y[Features] #Features to test
y2 = y[Target] #Target Class to test
print('Feature Set Used : ', Features)
print('Target Class : ', Target)
print('Training Set Size : ', x.shape)
print('Test Set Size : ', y.shape)
# 6. Random Forest
# Create Model with configuration
rf_model = RandomForestClassifier(n_estimators=70, # Number of trees
min_samples_split = 30,
bootstrap = True,
max_depth = 50,
min_samples_leaf = 25)
# Model Training
rf_model.fit(X=x1,
y=x2)
# Prediction
result = rf_model.predict(y[Features])
# Model Evaluation
ac_sc = accuracy_score(y2, result)
rc_sc = recall_score(y2, result, average="weighted")
pr_sc = precision_score(y2, result, average="weighted")
f1_sc = f1_score(y2, result, average='micro')
confusion_m = confusion_matrix(y2, result)
print("========== Random Forest Results ==========")
print("Accuracy : ", ac_sc)
print("Recall : ", rc_sc)
print("Precision : ", pr_sc)
print("F1 Score : ", f1_sc)
print("Confusion Matrix: ")
print(confusion_m)
# 7. Neural Network
# Create Model with configuration
nn_model = MLPClassifier(solver='adam',
alpha=1e-5,
hidden_layer_sizes=(40,),
random_state=1,
max_iter=1000
)
# Model Training
nn_model.fit(X=x1,
y=x2)
# Prediction
result = nn_model.predict(y[Features])
# Model Evaluation
ac_sc = accuracy_score(y2, result)
rc_sc = recall_score(y2, result, average="weighted")
pr_sc = precision_score(y2, result, average="weighted")
f1_sc = f1_score(y2, result, average='micro')
confusion_m = confusion_matrix(y2, result)
print("========== Neural Network Results ==========")
print("Accuracy : ", ac_sc)
print("Recall : ", rc_sc)
print("Precision : ", pr_sc)
print("F1 Score : ", f1_sc)
print("Confusion Matrix: ")
print(confusion_m)
# 8. Naive Bayes
# Create Model with configuration
nb_model = GaussianNB()
# Model Training
nb_model.fit(X=x1,
y=x2)
# Prediction
result = nb_model.predict(y[Features])
# Model Evaluation
ac_sc = accuracy_score(y2, result)
rc_sc = recall_score(y2, result, average="weighted")
pr_sc = precision_score(y2, result, average="weighted")
f1_sc = f1_score(y2, result, average='micro')
confusion_m = confusion_matrix(y2, result)
print("========== Naive Bayes Results ==========")
print("Accuracy : ", ac_sc)
print("Recall : ", rc_sc)
print("Precision : ", pr_sc)
print("F1 Score : ", f1_sc)
print("Confusion Matrix: ")
print(confusion_m)
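# 9. (Optional) Voting Ensemble
# VotingClassifier is imported above but never used; this is an illustrative
# sketch (not part of the original steps) of how it could combine the three
# models with majority voting.
ensemble = VotingClassifier(estimators=[('rf', rf_model),
                                        ('nn', nn_model),
                                        ('nb', nb_model)],
                            voting='hard')
ensemble.fit(X=x1, y=x2)
result = ensemble.predict(y[Features])
print("========== Voting Ensemble Results ==========")
print("Accuracy : ", accuracy_score(y2, result))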
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
|
A5 - Crime Analysis.ipynb | ###Markdown
Overview
This notebook will use a dataset imported from the Prince George's County MD government data repository and use it to analyze crime rates in the context of Covid-19. The dataset used can be found at the following webpage:
- https://data.princegeorgescountymd.gov/Public-Safety/Crime-Incidents-February-2017-to-Present/wb4e-w4nf
With terms of use defined here:
- https://data.princegeorgescountymd.gov/terms-of-use
This analysis will also reference the Johns Hopkins Covid case count dataset from Kaggle, which can be found at the URL below:
- https://www.kaggle.com/antgoldbloom/covid19-data-from-john-hopkins-university
The notebook will explore how weekly crime rates have differed before and after the emergence of Covid-19, running Welch T tests on corresponding pairs of crime rate sub-types as they are defined in the dataset.
Data Ingestion and Cleaning
Below, we will perform data ingestion from a local 'data' directory, cleaning the data for analysis while examining some very basic statistics.
###Code
# Imports used throughout this notebook (the scipy.stats tests are applied below)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import kstest, ttest_ind

# Johns Hopkins confirmed cases dataset runs from 01/23/2020 to 11/01/2021,
# we will use the date 01/23/2020 as the divisor of our dataset into pre and post-Covid
# timeframes.
covid_start_date = np.datetime64('2020-01-23')
# Read in data from local folder, examine head of dataframe for reference
crime = pd.read_csv('data/Crime_Incidents_February_2017_to_Present.csv')
crime.head()
# Examine different sub-types of crimes contained in the dataset
crime['Clearance_code_inc_type'].unique()
# Convert 'Date' field from string to datetime object,
# partition table into before and after confirmed cases began to get
# tracked through the Johns Hopkins dataset
crime['Date'] = pd.to_datetime(crime['Date'])
covid_crime = crime[crime['Date'] >= covid_start_date]
non_covid_crime = crime[crime['Date'] < covid_start_date]
# Compare means for crude hypothesis formation
print('Mean number of crimes committed daily during covid:',
covid_crime['Date'].count()/covid_crime['Date'].nunique())
print('Mean number of crimes committed daily prior to covid:',
non_covid_crime['Date'].count()/non_covid_crime['Date'].nunique())
###Output
Mean number of crimes committed daily during covid: 57.64339908952959
Mean number of crimes committed daily prior to covid: 66.36129032258064
###Markdown
As can be seen above, the average number of crimes committed daily is greater prior to Covid-19 than after. This fits the intuition that increased social restrictions imposed in reaction to the virus would also curb crime rates.
Analysis
Next, we will group the data by crime sub-type (originally defined in the dataset as 'Clearance_code_inc_type'), aggregating counts of each crime sub-type reported by week, finishing by running Welch T tests on all pairs of crime sub-types that are represented in both timeframes.
###Code
# Group Data by week, and count the number of each sub-type of crime
# reported according to sub-types defined in 'Clearance_code_inc_type'
# column.
covid_crime['Date'] = covid_crime['Date'] - pd.to_timedelta(7, unit='d')
covid_crime_agg = covid_crime.groupby([pd.Grouper(key='Date', freq='W-MON'),
'Clearance_code_inc_type'])['Incident_case_id'].count()
non_covid_crime['Date'] = non_covid_crime['Date'] - pd.to_timedelta(7, unit='d')
non_covid_crime_agg = non_covid_crime.groupby([pd.Grouper(key='Date', freq='W-MON'),
'Clearance_code_inc_type'])['Incident_case_id'].count()
# Define function that constructs a dictionary with the keys being
# each type of crime, and the values being a list of the weekly occurences of that
# sub-type of crime. Takes a Pandas Series object representing an aggregation by
# week on the original dataset, as well as the total number of weeks within that
# aggregation as parameters.
def compute_list_by_crime(ser, num_weeks):
result = {}
for row in ser.index:
crime_title = row[1]
if crime_title not in result.keys():
result[crime_title] = []
result[crime_title].append(ser[row])
# Ensure weeks with zero crimes of any one sub-type are still counted
for key in result.keys():
while len(result[key]) != num_weeks:
result[key].append(0)
return result
# calculate number of weeks in covid and non-covid datasets
weeks_non_covid = non_covid_crime_agg.reset_index()['Date'].nunique()
weeks_covid = covid_crime_agg.reset_index()['Date'].nunique()
# Compute dictionaries of crimes
non_covid_crime_list = compute_list_by_crime(non_covid_crime_agg, weeks_non_covid)
covid_crime_list = compute_list_by_crime(covid_crime_agg, weeks_covid)
###Output
_____no_output_____
###Markdown
Normality Assessment
Below, we will visualize and then assess the normality of weekly report counts by crime sub-type.
###Code
# Visualize normality of data with histograms, plotting weekly report counts by crime sub-type
fig, axs = plt.subplots(4, 5, figsize=(15, 15))
axs = axs.ravel()
plt_index = 0
for key in covid_crime_list.keys():
if key in non_covid_crime_list.keys():
axs[plt_index].hist(non_covid_crime_list[key])
axs[plt_index].hist(covid_crime_list[key])
axs[plt_index].set_title(key)
plt_index += 1
fig.delaxes(axs[-1])
fig.delaxes(axs[-2])
fig.legend(title='Timeframe', labels=['Non-Covid', 'Covid'])
###Output
_____no_output_____
###Markdown
As can be seen above, some plots appear normal while others greatly diverge from normality. To put a better metric on this, we'll run a Kolmogorov-Smirnov test on each pair of weekly report counts by crime sub-type, creating a list of crime sub-types that violate normality.
###Code
# Run Kolmogorov-Smirnov tests to judge normality on all crime subtypes, generate list of subtypes
# were at least one of the distributions from either timeframe violates normality.
# Record this list in 'non_normal_subtypes'
non_normal_subtypes = []
for key in covid_crime_list.keys():
if key in non_covid_crime_list.keys():
# Standardize both datasets prior to running the test
cov_ks_test = kstest((covid_crime_list[key]-np.mean(covid_crime_list[key]))/np.std(covid_crime_list[key]),
'norm')
non_cov_ks_test = kstest((non_covid_crime_list[key]-np.mean(non_covid_crime_list[key]))/np.std(non_covid_crime_list[key]),
'norm')
if cov_ks_test.pvalue < .05 or non_cov_ks_test.pvalue < .05:
non_normal_subtypes.append(key)
# Define function to compute the mean and standard deviation of weekly
# crime occurences for each sub-type of crime, returning the result in a
# dictionary. Takes a dictionary with crime sub-types as keys and lists of weekly
# crime counts as values.
def compute_stats(dic):
result = {}
for key in dic.keys():
mean = np.mean(dic[key])
sd = np.std(dic[key])
result[key] = [mean, sd]
return result
# Compute stats on crime data for each sub-type
non_covid_crime_stats = compute_stats(non_covid_crime_list)
covid_crime_stats = compute_stats(covid_crime_list)
###Output
_____no_output_____
###Markdown
Hypothesis Testing
Now, we will run Welch's T tests comparing the difference in mean number of crimes reported by week for each pair of crime sub-types in the two timeframes. Sub-types that violated normality according to the KS test are not included. The two hypotheses are made explicit below:
$H_0$: Mean number of crimes reported by week of each sub-type were the same during and not during covid
$H_A$: Mean number of crimes reported by week of each sub-type was less during covid than not during covid
The sub-types and their respective p-values are printed by the code below:
###Code
# H_0: Mean number of crimes by week of each sub-type were the same during and not during covid
# H_A: Mean number of crimes by week of each sub-type was less during covid than not during covid
# If p-value is very low, that means the data suggests the alternative is true, if p-value is very high,
# that suggests that the opposite of the alternative is true (i.e. the mean number of crimes by week of
# each sub-type was greater during covid than not)
tested_sub_types = []
for key in covid_crime_stats.keys():
if key in non_covid_crime_stats.keys() and key not in non_normal_subtypes:
print(key +':\n\n\tp-value: ' + '{:0.3e}'.format(ttest_ind(non_covid_crime_list[key], covid_crime_list[key],
alternative='less', equal_var=False).pvalue), '\n\n')
tested_sub_types.append(key)
###Output
ACCIDENT:
p-value: 2.390e-08
ACCIDENT WITH IMPOUND:
p-value: 9.533e-01
ASSAULT:
p-value: 1.000e+00
AUTO, STOLEN:
p-value: 8.181e-01
B & E, COMMERCIAL:
p-value: 2.263e-01
ROBBERY, OTHER:
p-value: 1.000e+00
THEFT:
p-value: 1.000e+00
THEFT FROM AUTO:
p-value: 9.996e-01
###Markdown
VisualizationHere, we'll use a boxplot to visualize the eight crime sub-types tested - gaining additional perspective on how the data is distributed.
###Code
# Create dataframes from lists, assign additional column for timeframe division,
# inner join the two new dataframes and pivot the concatenated dataframe on TimeFrame
non_covid_counts = pd.DataFrame.from_dict(non_covid_crime_list).assign(TimeFrame='Non-Covid')
covid_counts = pd.DataFrame.from_dict(covid_crime_list).assign(TimeFrame='Covid')
cdf = pd.concat([non_covid_counts, covid_counts], join='inner')
mdf = pd.melt(cdf, id_vars=['TimeFrame'], var_name=['Crime Sub-Type'])
# Specify plot parameters, produce the plot, and save it
fig, ax = plt.subplots(figsize=(15,10))
ax = sns.boxplot(x="Crime Sub-Type", y="value", hue="TimeFrame",
data=mdf[mdf['Crime Sub-Type'].isin(tested_sub_types)])
ax.set_title('Mean Number of Crimes Reported Per Week per Crime Sub-Type')
ax.set_ylabel('Mean Number of Crimes Reported Per Week')
plt.savefig('visualizations/tested_subtypes_box_plot.png', facecolor='w')
###Output
_____no_output_____ |
Digital-Signal-Processing/signal_discontinuity_naive.ipynb | ###Markdown
Signal Discontinuity [naive]
---
- Author: Diego Inácio
- GitHub: [github.com/diegoinacio](https://github.com/diegoinacio)
- Notebook: [signal_discontinuity_naive.ipynb](https://github.com/diegoinacio/computer-vision-notebooks/blob/master/Digital-Signal-Processing/signal_discontinuity_naive.ipynb)
---
A naive solution to the frequency discontinuity between two concatenated signals.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Audio
import numpy as np
from _utils import *
fs = 44100
t = np.linspace(-1, 1, 2*fs)
xo = np.sin(2*np.pi*110*t + np.pi*0.5)
xi = np.sin(2*np.pi*215*t)
audiovis(xo, time=t, tlim=[-0.1, 0.1], text='Signal 110 Hz')
audiovis(xi, time=t, tlim=[-0.1, 0.1], text='Signal 210 Hz')
# heaviside step transition
b = t >= 0
x = (1 - b)*xo + b*xi
audiovis(x, time=t, tlim=[-0.1, 0.1], text='Signal discontinuity | "Heaviside step" transition')
Audio(x, rate=fs) # we can hear a 'tic' sound on transition
spectrogram(x, flim=[0, 1000], text='Discontinuity signal spectrogram')
###Output
audio mono
###Markdown
1. Logistic function
---
The [logistic function](https://en.wikipedia.org/wiki/Logistic_function) produces a sigmoidal curve, represented as a function of *t* by:
$$\large f(t)=\frac{L}{1+e^{-k(t - t_0)}}$$
where:
* $L$ : the sigmoid's maximum value;
* $t_0$ : the value of *t* at the curve's midpoint;
* $k$ : the curve's slope.
The sigmoid curve will be used as the parameter of a linear interpolation. This process is given by:
$$\large y=(1 - s)x_o + s x_i$$
where *s* is the sigmoid transition and $\large x_o, x_i$ are the two signals to be crossfaded.
###Code
# logistic function transition
def logisticFunction(t, t0, k, L=1):
return L/(1 + np.exp(-k*(t - t0)))
t0 = 0
k = 256
s = logisticFunction(t, t0, k)
x = (1 - s)*xo + s*xi
audiovis(s, time=t, tlim=[-0.1, 0.1], text='Logistic function | t0 = {0}, k = {1}'.format(t0, k))
audiovis(x, time=t, tlim=[-0.1, 0.1], text='Signal with less discontinuity | transition by logistic function')
Audio(x, rate=fs) # no 'tic'
spectrogram(x, flim=[0, 1000], text='Less descontinuity spectrogram')
###Output
audio mono
|
Exercises_Day2_AMora.ipynb | ###Markdown
Exercises
This will be a notebook for you to work through the exercises during the workshop. Feel free to work on these at whatever pace you feel works for you, but I encourage you to work together! Edit the title of this notebook with your name because I will ask you to upload your final notebook to our shared github repository at the end of this workshop.
Feel free to google the documentation for numpy, matplotlib, etc.
Don't forget to start by importing any libraries you need.
###Code
# import your libraries here
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Day 1 Exercise 1
A. Create an array with 10 evenly spaced values in logspace ranging from 0.1 to 10,000.
B. Print the following values: the first value in the array, the final value in the array, and the range of 5th-8th values.
C. Append the numbers 10,001 and 10,002 (as floats) to the array. Make sure you define this!
D. Divide your new array by 2.
E. Reshape your array to be 3 x 4.
F. Multiply your array by itself.
G. Print out the number of dimensions and the maximum value.
###Code
# your solution here
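# One possible solution sketch:
# A. 10 log-spaced values from 0.1 to 10,000
arr = np.logspace(-1, 4, 10)
# B. first value, final value, and the 5th-8th values
print(arr[0], arr[-1], arr[4:8])
# C. append 10,001 and 10,002 as floats (redefining the array)
arr = np.append(arr, [10001., 10002.])
# D. divide the new array by 2
arr = arr / 2
# E. reshape to 3 x 4
arr = arr.reshape(3, 4)
# F. multiply the array by itself
arr = arr * arr
# G. number of dimensions and maximum value
print(arr.ndim, arr.max())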
###Output
_____no_output_____
###Markdown
Day 2 Exercise 1
A. Create an array containing the values 4, 0, 6, 5, 11, 14, 12, 14, 5, 16.
B. Create a 10x2 array of zeros.
C. Write a for loop that checks if each of the numbers in the first array squared is less than 100. If the statement is true, change that row of your zeros array to equal the number and its square. Hint: you can change the value of an array by stating "zerosarray[i] = [a number, a number squared]".
D. Print out the final version of your zeros array. Hint: should you loop over the elements of the array or the indices of the array?
###Code
# your solutions here
#A
an_array=np.array([4,0,6,5,11,14,12,14,5,16])
#B
z_array=np.zeros((10,2))
print(z_array)
#C
for a in an_array:
if(a**2)<100:
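        # np.where returns every index where the value matches, so a duplicate
        # value (like the two 5s here) fills both of its rows at once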
i=np.where(an_array==a)
print('i=', i)
z_array[i]=[a,a**2]
else:
print(a,'squared > 100')
#D
print('final z_array=',z_array[0:10])
###Output
[[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]]
i= (array([0]),)
i= (array([1]),)
i= (array([2]),)
i= (array([3, 8]),)
11 squared > 100
14 squared > 100
12 squared > 100
14 squared > 100
i= (array([3, 8]),)
16 squared > 100
final z_array= [[ 4. 16.]
[ 0. 0.]
[ 6. 36.]
[ 5. 25.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 5. 25.]
[ 0. 0.]]
###Markdown
Exercise 2 A. Write a function that takes an array of numbers and spits out the Gaussian distribution. Yes, there is a function for this in Python, but it's good to do this from scratch! This is the equation: $$ f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp{\frac{-(x - \mu)^2}{2\sigma^2}} $$ (Pi is built into numpy, so call it as np.pi.) B. Call the function a few different times for different values of mu and sigma, between -10 < x < 10. C. Plot each version, making sure they are differentiated with different colors and/or linestyles and include a legend. Btw, here's a list of the customizations available in matplotlib: https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.plot.html https://matplotlib.org/gallery/color/named_colors.html D. Save your figure. If you have multiple lines with plt.plot(), Python will plot all of them together, unless you write plt.show() after each one. I want these all on one plot.
###Code
# your solutions here
#A
def GaussDistr(x,sigma,mu):
first = (sigma * np.sqrt(np.pi*2))**-1
second = (-(x-mu)**2 / (2*sigma**2))
y= first *np.exp(second)
return y
#B
# x array
x_array=np.linspace(-10,10,200)
#MU AND SIGMA
# mu and sigma are single values per curve (arrays the same length as x
# would vary them point-by-point and allow sigma <= 0)
mu1, sigma1 = 0, 1
mu2, sigma2 = -3, 2
mu3, sigma3 = 4, 0.5
#C
plt.style.use("dark_background")
plt.plot(x_array,GaussDistr(x_array,sigma1,mu1), label = "mu=0, sigma=1")
plt.plot(x_array,GaussDistr(x_array,sigma2,mu2), '--', label = "mu=-3, sigma=2")
plt.plot(x_array,GaussDistr(x_array,sigma3,mu3), ':', label = "mu=4, sigma=0.5")
plt.legend(loc = "best", prop = {"size":13})
plt.title("Gaussian Distribution", size=25, weight="bold")
#D
# save the current figure; plt.figure() would open a new, empty one
plt.savefig('GaussianDistribution.jpg')
###Output
_____no_output_____
###Markdown
Day 3 Exercise 1There is a file in this directory called "histogram_exercise.dat" which consists of of randomly generated samples from a Gaussian distribution with an unknown $\mu$ and $\sigma$. Using what you've learned about fitting data, load up this file using np.genfromtxt, fit a Gaussian curve to the data and plot both the curve and the histogram of the data. As always, label everything, play with the colors, and choose a judicious bin size. Hint: if you attempt to call a function from a library or package that hasn't been imported, you will get an error.
###Code
# your solution here
###Output
_____no_output_____
###Markdown
Exercise 2
Create a 1D interpolation along these arrays. Plot both the data (as points) and the interpolation (as a dotted line). Also plot the value of the interpolated function at x=325. What does the function look like to you?
###Code
x = np.array([0., 50., 100., 150., 200., 250., 300., 350., 400., 450., 500])
y = np.array([0., 7.071, 10., 12.247, 14.142, 15.811, 17.321, 18.708, 20., 21.213, 22.361])
# solution here
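# One possible solution sketch:
from scipy.interpolate import interp1d

f = interp1d(x, y)
xgrid = np.linspace(x.min(), x.max(), 200)
plt.plot(x, y, 'o', label='data')
plt.plot(xgrid, f(xgrid), ':', label='interpolation')
plt.plot(325, f(325), 's', label='f(325) = {:.3f}'.format(float(f(325))))
plt.legend()
plt.show()
# The curve flattens like a square root: y is roughly sqrt(x).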
###Output
_____no_output_____
###Markdown
Day 4 Exercise 1
Let's practice some more plotting skills, now incorporating units.
A. Write a function that takes an array of frequencies and spits out the Planck distribution. That's this equation:
$$ B(\nu, T) = \frac{2h\nu^3/c^2}{e^{\frac{h\nu}{k_B T}} - 1} $$
This requires you to use the Planck constant, the Boltzmann constant, and the speed of light from astropy. Make sure they are all in cgs.
B. Plot your function in log-log space for T = 25, 50, and 300 K. The most sensible frequency range is about 10^5 to 10^15 Hz. Hint: if your units are correct, your peak values of B(T) should be on the order of 10^-10. Make sure everything is labelled.
###Code
# solution here
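# One possible solution sketch, using cgs constants from astropy:
from astropy.constants import h, c, k_B

def planck(nu, T):
    """Planck distribution B(nu, T) in cgs units."""
    h_, c_, kB_ = h.cgs.value, c.cgs.value, k_B.cgs.value
    return (2 * h_ * nu**3 / c_**2) / np.expm1(h_ * nu / (kB_ * T))

nu = np.logspace(5, 15, 500)
for T in [25, 50, 300]:
    plt.loglog(nu, planck(nu, T), label='T = {} K'.format(T))
plt.xlabel(r'$\nu$ [Hz]')
plt.ylabel(r'$B(\nu, T)$ [erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ sr$^{-1}$]')
plt.legend()
plt.show()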
###Output
_____no_output_____
###Markdown
Exercise 2Let's put everything together now! Here's a link to the full documentation for FITSFigure, which will tell you all of the customizable options: http://aplpy.readthedocs.io/en/stable/api/aplpy.FITSFigure.html. Let's create a nice plot of M51 with a background optical image and X-ray contours overplotted.The data came from here if you're interested: http://chandra.harvard.edu/photo/openFITS/multiwavelength_data.htmlA. Using astropy, open the X-RAY data (m51_xray.fits). Flatten the data array and find its standard deviation, and call it sigma.B. Using aplpy, plot a colorscale image of the OPTICAL data. Choose a colormap that is visually appealing (list of them here: https://matplotlib.org/2.0.2/examples/color/colormaps_reference.html). Show the colorbar. C. Plot the X-ray data as contours above the optical image. Make the contours spring green with 80% opacity and dotted lines. Make the levels go from 2$\sigma$ to 10$\sigma$ in steps of 2$\sigma$. (It might be easier to define the levels array before show_contours, and set levels=levels.)
###Code
# solution here
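# One possible solution sketch. The optical filename 'm51_optical.fits' is an
# assumption - substitute whatever the actual file is called.
from astropy.io import fits
import aplpy

# A. flatten the X-ray data and take its standard deviation
xray = fits.open('m51_xray.fits')
sigma = xray[0].data.flatten().std()

# B. colorscale image of the optical data, with a colorbar
fig = aplpy.FITSFigure('m51_optical.fits')
fig.show_colorscale(cmap='viridis')
fig.add_colorbar()

# C. X-ray contours from 2 sigma to 10 sigma in steps of 2 sigma
levels = np.arange(2, 12, 2) * sigma
fig.show_contour('m51_xray.fits', levels=levels, colors='springgreen',
                 alpha=0.8, linestyles='dotted')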
###Output
_____no_output_____ |
homework09/Homework09-Radhika.ipynb | ###Markdown
1. PART ONE: Write your few tiny functions
###Code
def time_of_day(timestring):
    time = timestring['time']
    yourdate = dateutil.parser.parse(time)
    time_words = int(yourdate.strftime("%H"))
    # plain chained elif ranges; bitwise & binds before comparisons,
    # so it cannot be used for these range checks
    if time_words < 11:
        return "Morning"
    elif time_words < 15:
        return "Noon"
    elif time_words < 19:
        return "Evening"
    else:
        return "Night"

time_of_day(earthquake)
def day_in_words(timestring):
time = timestring['time']
yourday = dateutil.parser.parse(time)
day_words = yourday.strftime("%a")
return day_words
day_in_words(earthquake)
def date_in_words(timestring):
time = timestring['time']
yourdate = dateutil.parser.parse(time)
date_words = yourdate.strftime("%b %d")
return date_words
date_in_words(earthquake)
def depth_is(My_Parameter):
    number = int(My_Parameter['depth'])
    # plain chained comparisons: 0-70 km shallow, 70-300 km intermediate,
    # 300-700 km deep
    if 0 <= number <= 70:
        return "shallow"
    elif number <= 300:
        return "intermediate"
    elif number <= 700:
        return "deep"

depth_is(earthquake)
def mag_words(my_mag):
    num = float(my_mag['mag'])
    if num < 2:
        return "Micro"
    elif num < 4:
        return "Minor"
    elif num < 5:
        return "Light"
    elif num < 6:
        return "Moderate"
    elif num < 7:
        return "Strong"
    elif num < 8:
        return "Major"
    else:
        # magnitude 8 and above
        return "Great"

mag_words(earthquake)
def mag_is(my_mag):
num = str(my_mag['mag'])
return num
mag_is(earthquake)
def place_is(my_place):
place = my_place['place']
return place
place_is(earthquake)
###Output
_____no_output_____
###Markdown
2. PART TWO: Write the eq_to_sentence function
###Code
def eq_to_sentence(my_dict):
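    # the commas make this return a tuple of strings rather than a single sentence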
return "A " + depth_is(my_dict), mag_words(my_dict), mag_is(my_dict) + " earthquake was reported " + day_in_words(my_dict), time_of_day(my_dict) + " on " + date_in_words(my_dict), place_is(my_dict)
eq_to_sentence(earthquake)
###Output
_____no_output_____
###Markdown
3. PART THREE: Doing it in bulk
###Code
earthquakes_df = pd.read_csv("1.0_month.csv")
earthquakes = earthquakes_df.to_dict('records')
earthquakes
for item in earthquakes:
if item['mag'] >= 4:
print(eq_to_sentence(item))
###Output
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Noon on Jun 20', '56km NNE of Port-Vila, Vanuatu')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on Jun 20', '238km SE of Lambasa, Fiji')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on Jun 20', '21km S of Hukumati Dahanah-ye Ghori, Afghanistan')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Noon on Jun 20', '56km S of Molibagu, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Noon on Jun 20', '130km NE of San Pedro de Atacama, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Noon on Jun 20', '99km W of San Antonio de los Cobres, Argentina')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on Jun 20', '30km ENE of Nanae, Japan')
('A shallow', 'Moderate', '5.0 earthquake was reported Mon', 'Noon on Jun 20', '18km NE of Norsup, Vanuatu')
('A shallow', 'Moderate', '5.2 earthquake was reported Mon', 'Noon on Jun 20', '118km NE of Tadine, New Caledonia')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Noon on Jun 20', '233km NE of Fais, Micronesia')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on Jun 20', '48km WNW of San Antonio de los Cobres, Argentina')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on Jun 20', '190km WSW of Hachijo-jima, Japan')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Morning on Jun 20', '21km NE of Yilan, Taiwan')
('A shallow', 'Moderate', '5.4 earthquake was reported Mon', 'Morning on Jun 20', '95km SW of Isangel, Vanuatu')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Morning on Jun 20', 'Off the west coast of northern Sumatra')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on Jun 20', '77km WSW of Coquimbo, Chile')
('A shallow', 'Light', '4.0 earthquake was reported Mon', 'Morning on Jun 20', '83km SSW of Nikolski, Alaska')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on Jun 20', '196km N of Tobelo, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on Jun 20', '59km NE of Taitung City, Taiwan')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Noon on Jun 19', "164km ENE of L'Esperance Rock, New Zealand")
('A shallow', 'Moderate', '5.4 earthquake was reported Sun', 'Noon on Jun 19', 'Ascension Island region')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Noon on Jun 19', 'Ascension Island region')
('A shallow', 'Moderate', '5.4 earthquake was reported Sun', 'Noon on Jun 19', '54km WSW of Sabtang, Philippines')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Noon on Jun 19', '116km SW of Isangel, Vanuatu')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on Jun 19', '110km WNW of Tobelo, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Noon on Jun 19', '150km ESE of Iquique, Chile')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Noon on Jun 19', '25km ENE of Linares, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Noon on Jun 19', '9km NE of Zonda, Argentina')
('A shallow', 'Light', '4.9 earthquake was reported Sun', 'Noon on Jun 19', '117km SW of Isangel, Vanuatu')
('A shallow', 'Light', '4.7 earthquake was reported Sun', 'Noon on Jun 19', '92km SSW of Isangel, Vanuatu')
('A shallow', 'Light', '4.9 earthquake was reported Sun', 'Morning on Jun 19', '117km SW of Isangel, Vanuatu')
('A shallow', 'Moderate', '5.1 earthquake was reported Sun', 'Morning on Jun 19', '114km SW of Isangel, Vanuatu')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Morning on Jun 19', '91km SSW of Isangel, Vanuatu')
('A shallow', 'Strong', '6.3 earthquake was reported Sun', 'Morning on Jun 19', '84km SSW of Isangel, Vanuatu')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Morning on Jun 19', '74km SW of Isangel, Vanuatu')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Morning on Jun 19', '61km SSW of Juli, Peru')
('A shallow', 'Moderate', '5.2 earthquake was reported Sun', 'Morning on Jun 19', '118km ESE of Bitung, Indonesia')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Morning on Jun 19', '136km NE of Aksu, China')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Morning on Jun 19', '264km WNW of Saumlaki, Indonesia')
('A shallow', 'Moderate', '5.3 earthquake was reported Sun', 'Morning on Jun 19', 'Southwest Indian Ridge')
('A shallow', 'Light', '4.9 earthquake was reported Sun', 'Morning on Jun 19', 'Ascension Island region')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Noon on Jun 18', '91km ENE of Norsup, Vanuatu')
('A shallow', 'Moderate', '5.5 earthquake was reported Sat', 'Noon on Jun 18', '124km WNW of Bengkulu, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Noon on Jun 18', 'Izu Islands, Japan region')
('A shallow', 'Moderate', '5.0 earthquake was reported Sat', 'Noon on Jun 18', '118km SSW of Isangel, Vanuatu')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Noon on Jun 18', '204km WSW of Puerto Natales, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Noon on Jun 18', '221km NNW of Saumlaki, Indonesia')
('A shallow', 'Moderate', '5.5 earthquake was reported Sat', 'Noon on Jun 18', '78km W of San Antonio de los Cobres, Argentina')
('A shallow', 'Light', '4.3 earthquake was reported Sat', 'Noon on Jun 18', '35km NE of Jarm, Afghanistan')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Noon on Jun 18', '27km NW of Ayaviri, Peru')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Noon on Jun 18', 'Southwest Indian Ridge')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Noon on Jun 18', '8km W of Uto, Japan')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Morning on Jun 18', '115km W of San Antonio de los Cobres, Argentina')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on Jun 18', '213km S of Punta de Burica, Panama')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Morning on Jun 18', '23km ENE of Lata, Solomon Islands')
('A shallow', 'Light', '4.7 earthquake was reported Sat', 'Morning on Jun 18', '105km SSW of Kota Ternate, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Morning on Jun 18', '138km NW of Kota Ternate, Indonesia')
('A shallow', 'Moderate', '5.0 earthquake was reported Sat', 'Morning on Jun 18', '143km S of False Pass, Alaska')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 17', '24km ENE of Oarai, Japan')
('A shallow', 'Light', '4.9 earthquake was reported Fri', 'Noon on Jun 17', 'Southern East Pacific Rise')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 17', '99km ENE of Keelung, Taiwan')
('A shallow', 'Moderate', '5.2 earthquake was reported Fri', 'Noon on Jun 17', '111km SSE of Lata, Solomon Islands')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Noon on Jun 17', '123km NNE of Tadine, New Caledonia')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on Jun 17', '202km E of Hachijo-jima, Japan')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Morning on Jun 17', '94km NNE of Palue, Indonesia')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Morning on Jun 17', '123km SSW of Isangel, Vanuatu')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Morning on Jun 17', '44km E of Kerman, Iran')
('A shallow', 'Moderate', '5.1 earthquake was reported Fri', 'Morning on Jun 17', 'Kepulauan Barat Daya, Indonesia')
('A shallow', 'Moderate', '5.0 earthquake was reported Fri', 'Morning on Jun 17', '64km W of Ovalle, Chile')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Morning on Jun 17', '22km WSW of Coquimbo, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Morning on Jun 17', '56km SSW of Calama, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Morning on Jun 17', '153km W of Longyearbyen, Svalbard and Jan Mayen')
('A shallow', 'Moderate', '5.1 earthquake was reported Thu', 'Noon on Jun 16', '89km E of Naze, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Noon on Jun 16', '19km WNW of Atuncolla, Peru')
('A shallow', 'Moderate', '5.1 earthquake was reported Thu', 'Noon on Jun 16', '121km NE of Tadine, New Caledonia')
('A shallow', 'Moderate', '5.2 earthquake was reported Thu', 'Morning on Jun 16', '129km SSW of Isangel, Vanuatu')
('A shallow', 'Light', '4.5 earthquake was reported Thu', 'Morning on Jun 16', '274km NNW of Saumlaki, Indonesia')
('A shallow', 'Moderate', '5.2 earthquake was reported Thu', 'Morning on Jun 16', '22km ENE of Nanae, Japan')
('A shallow', 'Light', '4.8 earthquake was reported Thu', 'Morning on Jun 16', '14km WSW of Ovalle, Chile')
('A shallow', 'Moderate', '5.1 earthquake was reported Thu', 'Morning on Jun 16', '24km SE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.4 earthquake was reported Wed', 'Noon on Jun 15', 'Kuril Islands')
('A shallow', 'Light', '4.4 earthquake was reported Wed', 'Noon on Jun 15', '171km S of False Pass, Alaska')
('A shallow', 'Light', '4.8 earthquake was reported Wed', 'Noon on Jun 15', '114km SSE of Lorengau, Papua New Guinea')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Noon on Jun 15', '36km ESE of Pucallpa, Peru')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Noon on Jun 15', '2km WNW of La Esperanza, Panama')
('A shallow', 'Moderate', '5.7 earthquake was reported Wed', 'Noon on Jun 15', '11km SW of Pueblo Nuevo Tiquisate, Guatemala')
('A shallow', 'Moderate', '5.2 earthquake was reported Wed', 'Noon on Jun 15', '66km NW of Sola, Vanuatu')
('A shallow', 'Moderate', '5.0 earthquake was reported Wed', 'Morning on Jun 15', '43km W of `Alaqahdari-ye Kiran wa Munjan, Afghanistan')
('A shallow', 'Moderate', '5.6 earthquake was reported Wed', 'Morning on Jun 15', 'Balleny Islands region')
('A shallow', 'Light', '4.6 earthquake was reported Wed', 'Morning on Jun 15', '32km W of Nuqui, Colombia')
('A shallow', 'Light', '4.7 earthquake was reported Wed', 'Morning on Jun 15', '279km S of Kute, Indonesia')
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Noon on Jun 14', '149km N of Calama, Chile')
('A shallow', 'Moderate', '5.0 earthquake was reported Tue', 'Noon on Jun 14', 'Southern Mid-Atlantic Ridge')
('A shallow', 'Moderate', '5.6 earthquake was reported Tue', 'Noon on Jun 14', '194km ESE of Enarotali, Indonesia')
('A shallow', 'Light', '4.6 earthquake was reported Tue', 'Noon on Jun 14', '93km W of Harian, Indonesia')
('A shallow', 'Moderate', '5.1 earthquake was reported Tue', 'Noon on Jun 14', '39km SW of Adak, Alaska')
('A shallow', 'Moderate', '5.6 earthquake was reported Tue', 'Noon on Jun 14', '171km SSE of Naze, Japan')
('A shallow', 'Strong', '6.2 earthquake was reported Tue', 'Noon on Jun 14', '98km NNW of Isangel, Vanuatu')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on Jun 14', '20km ENE of Chepen, Peru')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Noon on Jun 14', '95km NNE of Chignik Lake, Alaska')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Noon on Jun 14', '9km S of San Vicente Pacaya, Guatemala')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Morning on Jun 14', '69km SW of Ocos, Guatemala')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Morning on Jun 14', '65km SW of Ocos, Guatemala')
('A shallow', 'Moderate', '5.0 earthquake was reported Tue', 'Morning on Jun 14', '21km SSW of Somotillo, Nicaragua')
('A shallow', 'Light', '4.9 earthquake was reported Tue', 'Morning on Jun 14', '6km N of Taron, Papua New Guinea')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Noon on Jun 13', '134km NNW of Labuhankananga, Indonesia')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on Jun 13', '135km ESE of Kirakira, Solomon Islands')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on Jun 13', 'South of Panama')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on Jun 13', '291km N of Tobelo, Indonesia')
('A shallow', 'Moderate', '5.0 earthquake was reported Mon', 'Noon on Jun 13', 'Western Indian-Antarctic Ridge')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on Jun 13', '181km ESE of Hachijo-jima, Japan')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on Jun 13', '63km SE of Caburan, Philippines')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on Jun 13', '31km S of Puerto El Triunfo, El Salvador')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on Jun 13', 'Southwest Indian Ridge')
('A shallow', 'Light', '4.34 earthquake was reported Mon', 'Noon on Jun 13', '52km W of West Yellowstone, Montana')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Morning on Jun 13', '176km SSE of Naze, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Morning on Jun 13', '82km SSW of Corinto, Nicaragua')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Morning on Jun 13', '17km W of Auki, Solomon Islands')
('A shallow', 'Moderate', '5.3 earthquake was reported Mon', 'Morning on Jun 13', '178km SSE of Naze, Japan')
('A shallow', 'Moderate', '5.7 earthquake was reported Mon', 'Morning on Jun 13', '163km SSE of Naze, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Morning on Jun 13', '30km S of Ndoi Island, Fiji')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Morning on Jun 13', 'Southern East Pacific Rise')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on Jun 13', '148km S of Puerto El Triunfo, El Salvador')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on Jun 13', '32km NNE of Tumbagaan, Philippines')
('A shallow', 'Moderate', '5.2 earthquake was reported Mon', 'Morning on Jun 13', 'Central East Pacific Rise')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Noon on Jun 12', '23km NNE of Ambunti, Papua New Guinea')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Noon on Jun 12', '240km WNW of Bandon, Oregon')
('A shallow', 'Moderate', '5.3 earthquake was reported Sun', 'Noon on Jun 12', '93km E of Shikotan, Russia')
('A shallow', 'Light', '4.0 earthquake was reported Sun', 'Noon on Jun 12', '49km E of , Azerbaijan')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on Jun 12', '13km SSW of Yatsushiro, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Sun', 'Noon on Jun 12', 'East of the Kuril Islands')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on Jun 12', '56km NNE of Grande Anse, Guadeloupe')
('A shallow', 'Light', '4.7 earthquake was reported Sun', 'Morning on Jun 12', '125km SSE of Kirakira, Solomon Islands')
('A shallow', 'Light', '4.7 earthquake was reported Sun', 'Morning on Jun 12', '190km NNW of Tobelo, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Morning on Jun 12', '61km N of Agrihan, Northern Mariana Islands')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Morning on Jun 12', '181km SSE of Bitung, Indonesia')
('A shallow', 'Light', '4.0 earthquake was reported Sat', 'Noon on Jun 11', '24km SW of Lakhdaria, Algeria')
('A shallow', 'Light', '4.9 earthquake was reported Sat', 'Noon on Jun 11', '6km ENE of Noda, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Sat', 'Noon on Jun 11', '40km SW of Ashkasham, Afghanistan')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Noon on Jun 11', '68km SSE of Tuensang, India')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Noon on Jun 11', '58km SSW of Bunisari, Indonesia')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on Jun 11', "62km SE of Kuril'sk, Russia")
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Morning on Jun 11', '4km NNW of Koronadal, Philippines')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on Jun 11', '16km N of Banjar Sidayu, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Morning on Jun 11', '18km E of Puerto Morazan, Nicaragua')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Morning on Jun 11', '288km N of Ndoi Island, Fiji')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Morning on Jun 11', '287km SE of Lambasa, Fiji')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on Jun 11', '186km SE of Sarangani, Philippines')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on Jun 10', '21km ENE of Shughnon, Tajikistan')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Noon on Jun 10', 'Southeast of Easter Island')
('A shallow', 'Light', '4.8 earthquake was reported Fri', 'Noon on Jun 10', '288km SSE of Sigave, Wallis and Futuna')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 10', '25km WNW of Solhan, Turkey')
('A shallow', 'Moderate', '5.1 earthquake was reported Fri', 'Noon on Jun 10', '14km ESE of Kabayan, Philippines')
('A shallow', 'Moderate', '5.5 earthquake was reported Fri', 'Noon on Jun 10', '263km SSE of Sigave, Wallis and Futuna')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on Jun 10', 'East of the Kuril Islands')
('A shallow', 'Light', '4.8 earthquake was reported Fri', 'Noon on Jun 10', '110km ENE of Hihifo, Tonga')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Noon on Jun 10', '62km W of Abra Pampa, Argentina')
('A shallow', 'Moderate', '5.3 earthquake was reported Fri', 'Noon on Jun 10', '249km ESE of Kamaishi, Japan')
('A shallow', 'Moderate', '5.2 earthquake was reported Fri', 'Morning on Jun 10', '107km SSE of Hihifo, Tonga')
('A shallow', 'Moderate', '5.5 earthquake was reported Fri', 'Morning on Jun 10', '106km ENE of Georgetown, Saint Helena')
('A shallow', 'Moderate', '5.17 earthquake was reported Fri', 'Morning on Jun 10', '20km NNW of Borrego Springs, CA')
('A shallow', 'Light', '4.8 earthquake was reported Fri', 'Morning on Jun 10', '25km S of Somotillo, Nicaragua')
('A shallow', 'Strong', '6.2 earthquake was reported Fri', 'Morning on Jun 10', '20km WNW of Auki, Solomon Islands')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Morning on Jun 10', '19km SSW of Somotillo, Nicaragua')
('A shallow', 'Light', '4.8 earthquake was reported Fri', 'Morning on Jun 10', '23km E of Puerto Morazan, Nicaragua')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Morning on Jun 10', '66km NNE of Tela, Honduras')
('A shallow', 'Moderate', '5.1 earthquake was reported Fri', 'Morning on Jun 10', '24km S of Somotillo, Nicaragua')
('A shallow', 'Strong', '6.1 earthquake was reported Fri', 'Morning on Jun 10', '17km E of Puerto Morazan, Nicaragua')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Morning on Jun 10', '251km NNE of Chichi-shima, Japan')
('A shallow', 'Moderate', '5.0 earthquake was reported Fri', 'Morning on Jun 10', '183km NNE of Fais, Micronesia')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Noon on Jun 09', '51km S of Molibagu, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Thu', 'Noon on Jun 09', '108km ENE of We, New Caledonia')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Noon on Jun 09', 'Southwest Indian Ridge')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Noon on Jun 09', '131km NNW of Kiunga, Papua New Guinea')
('A shallow', 'Light', '4.6 earthquake was reported Thu', 'Noon on Jun 09', '201km NW of Saumlaki, Indonesia')
('A shallow', 'Moderate', '5.2 earthquake was reported Thu', 'Noon on Jun 09', '5km WNW of Uddiawan, Philippines')
('A shallow', 'Light', '4.2 earthquake was reported Thu', 'Noon on Jun 09', '49km NNW of Chilecito, Argentina')
('A shallow', 'Light', '4.9 earthquake was reported Thu', 'Noon on Jun 09', '218km ENE of Neiafu, Tonga')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Morning on Jun 09', '35km WSW of Kiska Volcano, Alaska')
('A shallow', 'Light', '4.8 earthquake was reported Thu', 'Morning on Jun 09', '98km SSW of Bogorawatu, Indonesia')
('A shallow', 'Light', '4.7 earthquake was reported Thu', 'Morning on Jun 09', '99km SSW of Bogorawatu, Indonesia')
('A shallow', 'Light', '4.9 earthquake was reported Thu', 'Morning on Jun 09', '206km ESE of Hachijo-jima, Japan')
('A shallow', 'Light', '4.6 earthquake was reported Thu', 'Morning on Jun 09', 'South of the Fiji Islands')
('A shallow', 'Strong', '6.2 earthquake was reported Thu', 'Morning on Jun 09', '284km S of Kute, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Morning on Jun 09', '56km SSW of Tawun, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Morning on Jun 09', '4km S of Turija, Serbia')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on Jun 08', '99km SSW of Champerico, Guatemala')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Noon on Jun 08', '65km NNW of Barranca, Peru')
('A shallow', 'Light', '4.6 earthquake was reported Wed', 'Noon on Jun 08', '70km W of Coquimbo, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Wed', 'Noon on Jun 08', '287km E of Namie, Japan')
('A shallow', 'Moderate', '5.2 earthquake was reported Wed', 'Noon on Jun 08', 'Southern Mid-Atlantic Ridge')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on Jun 08', '50km SSE of Korsakov, Russia')
('A shallow', 'Moderate', '5.0 earthquake was reported Wed', 'Morning on Jun 08', 'Mid-Indian Ridge')
('A shallow', 'Moderate', '5.1 earthquake was reported Wed', 'Morning on Jun 08', '38km S of Puerto San Jose, Guatemala')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Morning on Jun 08', '78km SW of Puerto El Triunfo, El Salvador')
('A shallow', 'Moderate', '5.0 earthquake was reported Wed', 'Morning on Jun 08', 'Central East Pacific Rise')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Morning on Jun 08', '170km SSE of Naze, Japan')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Morning on Jun 08', '175km NNE of Esperance, Australia')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Morning on Jun 08', '149km SSE of Naze, Japan')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Morning on Jun 08', '44km SSW of Ashkasham, Afghanistan')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Morning on Jun 08', '116km WNW of Kota Ternate, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Morning on Jun 08', '96km NNE of Sangiang, Indonesia')
('A shallow', 'Light', '4.8 earthquake was reported Wed', 'Morning on Jun 08', '78km S of Nishinoomote, Japan')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on Jun 07', 'South of the Fiji Islands')
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Noon on Jun 07', '118km WNW of Kota Ternate, Indonesia')
('A shallow', 'Strong', '6.3 earthquake was reported Tue', 'Noon on Jun 07', '128km E of Bitung, Indonesia')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on Jun 07', '77km NE of Diego de Almagro, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on Jun 07', '14km SE of El Valle, Dominican Republic')
('A shallow', 'Light', '4.2 earthquake was reported Tue', 'Noon on Jun 07', "252km NE of Kuril'sk, Russia")
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Noon on Jun 07', '29km SW of Kimbe, Papua New Guinea')
('A shallow', 'Light', '4.2 earthquake was reported Tue', 'Noon on Jun 07', 'Fiji region')
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Noon on Jun 07', '76km ESE of Culaman, Philippines')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on Jun 07', '68km SSW of Abancay, Peru')
('A shallow', 'Light', '4.6 earthquake was reported Tue', 'Noon on Jun 07', '156km SSW of San Patricio, Mexico')
('A shallow', 'Moderate', '5.5 earthquake was reported Tue', 'Morning on Jun 07', '93km SSW of San Patricio, Mexico')
('A shallow', 'Strong', '6.2 earthquake was reported Tue', 'Morning on Jun 07', '102km SSW of San Patricio, Mexico')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Morning on Jun 07', '30km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Morning on Jun 07', '95km S of Chignik Lake, Alaska')
('A shallow', 'Light', '4.9 earthquake was reported Tue', 'Morning on Jun 07', '112km ENE of Ndoi Island, Fiji')
('A shallow', 'Moderate', '5.5 earthquake was reported Tue', 'Morning on Jun 07', 'South of the Fiji Islands')
('A shallow', 'Moderate', '5.1 earthquake was reported Tue', 'Morning on Jun 07', '104km E of Shikotan, Russia')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Morning on Jun 07', '4km ESE of Demirtas, Turkey')
('A shallow', 'Moderate', '5.4 earthquake was reported Tue', 'Morning on Jun 07', '70km WNW of Te Anau, New Zealand')
('A shallow', 'Light', '4.9 earthquake was reported Tue', 'Morning on Jun 07', '116km WSW of Kota Ternate, Indonesia')
('A shallow', 'Light', '4.0 earthquake was reported Tue', 'Morning on Jun 07', '25km ESE of Raoul Island, New Zealand')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Noon on Jun 06', '74km W of Te Anau, New Zealand')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Noon on Jun 06', '53km WNW of Porgera, Papua New Guinea')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Noon on Jun 06', '49km S of Las Choapas, Mexico')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on Jun 06', '16km ESE of Mucuchies, Venezuela')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Noon on Jun 06', '85km E of Iquique, Chile')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on Jun 06', '3km S of Muisne, Ecuador')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on Jun 06', 'South of Panama')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on Jun 06', '222km N of Chichi-shima, Japan')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Noon on Jun 06', '86km WNW of Polis, Cyprus')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Noon on Jun 06', '66km E of Namie, Japan')
('A shallow', 'Moderate', '5.3 earthquake was reported Mon', 'Noon on Jun 06', '23km WSW of Coquimbo, Chile')
('A shallow', 'Moderate', '5.4 earthquake was reported Mon', 'Noon on Jun 06', '23km SW of Coquimbo, Chile')
('A shallow', 'Light', '4.9 earthquake was reported Mon', 'Morning on Jun 06', '159km SSW of Biha, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on Jun 06', '58km NW of La Ligua, Chile')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Morning on Jun 06', '161km NNW of Atambua, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on Jun 06', "94km NE of Roshtqal'a, Tajikistan")
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Morning on Jun 06', '3km SE of Santa Catarina Juquila, Mexico')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on Jun 06', '16km SE of Ocos, Guatemala')
('A shallow', 'Strong', '6.1 earthquake was reported Mon', 'Morning on Jun 06', '84km S of Raoul Island, New Zealand')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on Jun 06', '226km SE of Lambasa, Fiji')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Morning on Jun 06', '53km NE of Sulangan, Philippines')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Morning on Jun 06', 'Southern Mid-Atlantic Ridge')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Morning on Jun 06', '61km ENE of Mazamari, Peru')
('A shallow', 'Light', '4.0 earthquake was reported Sun', 'Noon on Jun 05', 'South of the Fiji Islands')
('A shallow', 'Moderate', '5.4 earthquake was reported Sun', 'Noon on Jun 05', '202km NE of Neiafu, Tonga')
('A shallow', 'Moderate', '5.0 earthquake was reported Sun', 'Noon on Jun 05', 'Central East Pacific Rise')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on Jun 05', '11km W of Canoas, Costa Rica')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Noon on Jun 05', '8km SSW of Foca, Turkey')
('A shallow', 'Strong', '6.3 earthquake was reported Sun', 'Noon on Jun 05', '133km SW of Leksula, Indonesia')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Noon on Jun 05', '52km E of Palu, Indonesia')
('A shallow', 'Moderate', '5.1 earthquake was reported Sun', 'Noon on Jun 05', '145km SSE of Putre, Chile')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on Jun 05', '261km ESE of Sarangani, Philippines')
('A shallow', 'Moderate', '5.0 earthquake was reported Sun', 'Morning on Jun 05', '124km W of Airbuaya, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Morning on Jun 05', 'Central East Pacific Rise')
('A shallow', 'Moderate', '5.7 earthquake was reported Sun', 'Morning on Jun 05', '86km NW of Coquimbo, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on Jun 05', '20km NNE of Jayune, Peru')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on Jun 05', '105km NNW of False Pass, Alaska')
('A shallow', 'Light', '4.9 earthquake was reported Sun', 'Morning on Jun 05', '78km SE of Acari, Peru')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Morning on Jun 05', '141km WNW of Naze, Japan')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Morning on Jun 05', 'Fiji region')
('A shallow', 'Light', '4.7 earthquake was reported Sun', 'Morning on Jun 05', '36km SW of Santiago Pinotepa Nacional, Mexico')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Morning on Jun 05', '79km W of San Antonio de los Cobres, Argentina')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Morning on Jun 05', '13km SW of San Juan Cacahuatepec, Mexico')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on Jun 05', '5km N of San Pedro Amuzgos, Mexico')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on Jun 05', '93km S of Makry Gialos, Greece')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Morning on Jun 05', '7km NW of Cabrera, Dominican Republic')
('A shallow', 'Light', '4.3 earthquake was reported Sat', 'Noon on Jun 04', '34km W of Illapel, Chile')
('A shallow', 'Moderate', '5.2 earthquake was reported Sat', 'Noon on Jun 04', '60km SSW of Port-Vila, Vanuatu')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Noon on Jun 04', '150km WNW of Tobelo, Indonesia')
('A shallow', 'Light', '4.0 earthquake was reported Sat', 'Noon on Jun 04', '54km S of Hukumati Dahanah-ye Ghori, Afghanistan')
('A shallow', 'Light', '4.9 earthquake was reported Sat', 'Noon on Jun 04', '11km WSW of Lixourion, Greece')
('A shallow', 'Light', '4.7 earthquake was reported Sat', 'Noon on Jun 04', 'Southwest of Sumatra, Indonesia')
('A shallow', 'Moderate', '5.4 earthquake was reported Sat', 'Morning on Jun 04', '114km WSW of Kota Ternate, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Sat', 'Morning on Jun 04', '61km ESE of Kerman, Iran')
('A shallow', 'Moderate', '5.4 earthquake was reported Sat', 'Morning on Jun 04', '81km NE of Hihifo, Tonga')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on Jun 04', '98km E of Ndoi Island, Fiji')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Morning on Jun 04', '93km S of Fukue, Japan')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Morning on Jun 04', 'Central Mid-Atlantic Ridge')
('A shallow', 'Light', '4.2 earthquake was reported Sat', 'Morning on Jun 04', '93km S of La Libertad, El Salvador')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on Jun 04', '64km SW of Pasarbaru, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Morning on Jun 04', '88km WNW of Polis, Cyprus')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on Jun 03', "145km ENE of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.5 earthquake was reported Fri', 'Noon on Jun 03', 'South of the Fiji Islands')
('A shallow', 'Light', '4.1 earthquake was reported Fri', 'Noon on Jun 03', '108km NNW of Congkar, Indonesia')
('A shallow', 'Light', '4.0 earthquake was reported Fri', 'Noon on Jun 03', '180km NNW of Yunaska Island, Alaska')
('A shallow', 'Light', '4.1 earthquake was reported Fri', 'Noon on Jun 03', 'South of the Fiji Islands')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 03', "242km ESE of Nikol'skoye, Russia")
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Noon on Jun 03', '131km ESE of Hirara, Japan')
('A shallow', 'Light', '4.0 earthquake was reported Fri', 'Noon on Jun 03', '55km WNW of Illapel, Chile')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on Jun 03', '19km SSE of Charagua, Bolivia')
('A shallow', 'Light', '4.8 earthquake was reported Fri', 'Noon on Jun 03', '72km W of Pasirnangka, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 03', "245km ESE of Nikol'skoye, Russia")
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 03', "265km SE of Nikol'skoye, Russia")
('A shallow', 'Moderate', '5.3 earthquake was reported Fri', 'Noon on Jun 03', "250km ESE of Nikol'skoye, Russia")
('A shallow', 'Light', '4.1 earthquake was reported Fri', 'Noon on Jun 03', 'South Indian Ocean')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on Jun 03', '239km W of Port-Olry, Vanuatu')
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Noon on Jun 03', '62km S of Arica, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on Jun 03', '78km ESE of Sucua, Ecuador')
('A shallow', 'Moderate', '5.2 earthquake was reported Fri', 'Noon on Jun 03', 'South of Tasmania')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on Jun 03', 'Fiji region')
('A shallow', 'Light', '4.9 earthquake was reported Fri', 'Noon on Jun 03', '141km W of Itbayat, Philippines')
('A shallow', 'Light', '4.9 earthquake was reported Fri', 'Morning on Jun 03', '227km W of Hihifo, Tonga')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Morning on Jun 03', '180km SSE of Naze, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Morning on Jun 03', '86km S of Chignik Lake, Alaska')
('A shallow', 'Light', '4.0 earthquake was reported Fri', 'Morning on Jun 03', '60km S of Little Sitkin Island, Alaska')
('A shallow', 'Light', '4.5 earthquake was reported Fri', 'Morning on Jun 03', '25km N of Pujocucho, Peru')
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Morning on Jun 03', '18km SE of Ina, Japan')
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Morning on Jun 03', '111km SSW of Abepura, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Fri', 'Morning on Jun 03', '68km NW of Port-Olry, Vanuatu')
('A shallow', 'Light', '4.7 earthquake was reported Thu', 'Noon on Jun 02', '101km S of La Libertad, El Salvador')
('A shallow', 'Light', '4.0 earthquake was reported Thu', 'Noon on Jun 02', '147km N of Calama, Chile')
('A shallow', 'Light', '4.6 earthquake was reported Thu', 'Noon on Jun 02', 'Owen Fracture Zone region')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Noon on Jun 02', '39km NE of Palu, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Thu', 'Noon on Jun 02', '113km ESE of Port-Vila, Vanuatu')
('A shallow', 'Light', '4.1 earthquake was reported Thu', 'Noon on Jun 02', '155km ENE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.7 earthquake was reported Thu', 'Noon on Jun 02', '284km NE of Port Mathurin, Mauritius')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Noon on Jun 02', '13km NNE of Hamza, Uzbekistan')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Noon on Jun 02', '119km WSW of Itbayat, Philippines')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Noon on Jun 02', '41km S of Boal Atas, Indonesia')
('A shallow', 'Light', '4.6 earthquake was reported Thu', 'Noon on Jun 02', '241km NNW of Farallon de Pajaros, Northern Mariana Islands')
('A shallow', 'Light', '4.8 earthquake was reported Thu', 'Noon on Jun 02', "217km SW of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.9 earthquake was reported Thu', 'Noon on Jun 02', '71km S of Molibagu, Indonesia')
('A shallow', 'Light', '4.6 earthquake was reported Thu', 'Noon on Jun 02', '129km NNW of Labuhankananga, Indonesia')
('A shallow', 'Moderate', '5.1 earthquake was reported Thu', 'Noon on Jun 02', 'South of the Fiji Islands')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Morning on Jun 02', '205km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.7 earthquake was reported Thu', 'Morning on Jun 02', '178km SW of San Patricio, Mexico')
('A shallow', 'Moderate', '5.8 earthquake was reported Thu', 'Morning on Jun 02', '179km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.9 earthquake was reported Thu', 'Morning on Jun 02', '36km NNW of Pujocucho, Peru')
('A shallow', 'Strong', '6.6 earthquake was reported Wed', 'Noon on Jun 01', '79km W of Sungaipenuh, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on Jun 01', '1km ESE of Yachimata, Japan')
('A shallow', 'Light', '4.0 earthquake was reported Wed', 'Noon on Jun 01', '24km WNW of Ain Bessem, Algeria')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Noon on Jun 01', 'Carlsberg Ridge')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Noon on Jun 01', '86km WSW of Coquimbo, Chile')
('A shallow', 'Moderate', '5.1 earthquake was reported Wed', 'Noon on Jun 01', '82km S of Huancavelica, Peru')
('A shallow', 'Light', '4.7 earthquake was reported Wed', 'Noon on Jun 01', '91km SSE of Pujiharjo, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on Jun 01', '95km NE of Chernabura Island, Alaska')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Noon on Jun 01', 'Kuril Islands')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Noon on Jun 01', '29km S of Jarm, Afghanistan')
('A shallow', 'Light', '4.7 earthquake was reported Wed', 'Noon on Jun 01', '31km E of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Moderate', '5.5 earthquake was reported Wed', 'Noon on Jun 01', '106km S of Raoul Island, New Zealand')
('A shallow', 'Moderate', '5.1 earthquake was reported Wed', 'Noon on Jun 01', '111km SSW of Kokopo, Papua New Guinea')
('A shallow', 'Light', '4.7 earthquake was reported Wed', 'Noon on Jun 01', '54km WNW of Pedernales, Ecuador')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Noon on Jun 01', '98km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.0 earthquake was reported Wed', 'Noon on Jun 01', '63km WSW of Amatignak Island, Alaska')
('A shallow', 'Light', '4.8 earthquake was reported Wed', 'Morning on Jun 01', '44km WNW of Pedernales, Ecuador')
('A shallow', 'Moderate', '5.4 earthquake was reported Wed', 'Morning on Jun 01', '192km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Morning on Jun 01', '148km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Morning on Jun 01', '194km SW of San Patricio, Mexico')
('A shallow', 'Light', '4.6 earthquake was reported Wed', 'Morning on Jun 01', '40km W of Amatignak Island, Alaska')
('A shallow', 'Light', '4.4 earthquake was reported Wed', 'Morning on Jun 01', '23km ESE of Taquile, Peru')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Morning on Jun 01', "161km SSE of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.2 earthquake was reported Tue', 'Noon on May 31', '24km N of Nanao, Japan')
('A shallow', 'Moderate', '5.7 earthquake was reported Tue', 'Noon on May 31', '92km S of Chignik Lake, Alaska')
('A shallow', 'Light', '4.8 earthquake was reported Tue', 'Noon on May 31', '13km SW of Lakhdaria, Algeria')
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Noon on May 31', '3km SE of Mamburao, Philippines')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Noon on May 31', '32km WNW of Pedernales, Ecuador')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on May 31', '30km NW of Cempa, Indonesia')
('A shallow', 'Light', '4.8 earthquake was reported Tue', 'Noon on May 31', '196km NNE of Chichi-shima, Japan')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Noon on May 31', '103km NNE of Chignik Lake, Alaska')
('A shallow', 'Light', '4.5 earthquake was reported Tue', 'Noon on May 31', '20km NNW of Cortes, Philippines')
('A shallow', 'Moderate', '5.5 earthquake was reported Tue', 'Morning on May 31', 'Kuril Islands')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Morning on May 31', '111km NW of Namatanai, Papua New Guinea')
('A shallow', 'Light', '4.6 earthquake was reported Tue', 'Morning on May 31', '78km WNW of Port-Olry, Vanuatu')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Morning on May 31', '16km S of Parrita, Costa Rica')
('A shallow', 'Strong', '6.4 earthquake was reported Tue', 'Morning on May 31', '93km ENE of Keelung, Taiwan')
('A shallow', 'Light', '4.9 earthquake was reported Tue', 'Morning on May 31', '94km NE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Morning on May 31', '98km S of La Libertad, El Salvador')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Morning on May 31', '33km W of Andalgala, Argentina')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Morning on May 31', '20km S of King Salmon, Alaska')
('A shallow', 'Light', '4.5 earthquake was reported Tue', 'Morning on May 31', "176km ENE of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.5 earthquake was reported Tue', 'Morning on May 31', "106km ENE of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Morning on May 31', '263km ESE of Lambasa, Fiji')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', '96km NNE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on May 30', '93km NE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', '10km WSW of Piedecuesta, Colombia')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on May 30', '92km NE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Noon on May 30', '111km W of Rabaul, Papua New Guinea')
('A shallow', 'Moderate', '5.1 earthquake was reported Mon', 'Noon on May 30', '82km SE of Amahai, Indonesia')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Noon on May 30', '11km NW of Nakanojo, Japan')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', 'Off the east coast of the North Island of New Zealand')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', '4km W of Castel Viscardo, Italy')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', '254km ENE of Olonkinbyen, Svalbard and Jan Mayen')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Noon on May 30', '99km SE of Old Iliamna, Alaska')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 30', '55km WSW of Yonakuni, Japan')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Noon on May 30', '134km NNE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on May 30', '154km SW of Kavieng, Papua New Guinea')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on May 30', 'South Shetland Islands')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', '85km SE of Cabiraoan, Philippines')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 30', '99km NE of Tobelo, Indonesia')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Noon on May 30', '30km W of Illapel, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', 'West Chile Rise')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 30', "117km E of L'Esperance Rock, New Zealand")
('A shallow', 'Moderate', '5.4 earthquake was reported Mon', 'Morning on May 30', '116km S of Raoul Island, New Zealand')
('A shallow', 'Moderate', '5.2 earthquake was reported Mon', 'Morning on May 30', '126km ESE of Hirara, Japan')
('A shallow', 'Moderate', '5.2 earthquake was reported Mon', 'Morning on May 30', 'Southern Mid-Atlantic Ridge')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Morning on May 30', "123km E of L'Esperance Rock, New Zealand")
('A shallow', 'Moderate', '5.7 earthquake was reported Mon', 'Morning on May 30', '123km S of Raoul Island, New Zealand')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Morning on May 30', '62km NE of Atka, Alaska')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Morning on May 30', '127km NNE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Morning on May 30', '123km SSW of Merizo Village, Guam')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on May 30', '21km NE of Hualian, Taiwan')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Morning on May 30', '122km SSW of Merizo Village, Guam')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Morning on May 30', '252km NNW of Tual, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Morning on May 30', '24km E of Taniwel, Indonesia')
('A shallow', 'Light', '4.0 earthquake was reported Mon', 'Morning on May 30', '52km SSW of Gra Liyia, Greece')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Noon on May 29', '78km N of Iwo Jima, Japan')
('A shallow', 'Light', '4.0 earthquake was reported Sun', 'Noon on May 29', '16km SE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on May 29', '150km SSW of Kavieng, Papua New Guinea')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Noon on May 29', '108km W of Illapel, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Noon on May 29', '20km WNW of Ain Bessem, Algeria')
('A shallow', 'Moderate', '5.1 earthquake was reported Sun', 'Noon on May 29', '277km WNW of Saumlaki, Indonesia')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Noon on May 29', '83km ENE of Lar, Iran')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Noon on May 29', '94km S of La Libertad, El Salvador')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on May 29', '107km NNE of Anatahan, Northern Mariana Islands')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Noon on May 29', '59km S of Hirara, Japan')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on May 29', 'Tuamotu Archipelago, French Polynesia region')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Noon on May 29', '77km ESE of Iquique, Chile')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on May 29', '204km E of `Ohonua, Tonga')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Noon on May 29', '132km SW of Kokopo, Papua New Guinea')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Noon on May 29', '138km NNE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Noon on May 29', '106km ENE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Noon on May 29', 'Kuril Islands')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Noon on May 29', 'South of the Kermadec Islands')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Noon on May 29', '51km ENE of Port-Olry, Vanuatu')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Morning on May 29', '132km ENE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on May 29', '84km WNW of Kirakira, Solomon Islands')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Morning on May 29', '6km NE of `Alaqahdari-ye Kiran wa Munjan, Afghanistan')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 29', '10km SSE of Boyuibe, Bolivia')
('A shallow', 'Moderate', '5.0 earthquake was reported Sun', 'Morning on May 29', 'Northern Mid-Atlantic Ridge')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on May 29', '16km WSW of Ain Bessem, Algeria')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Morning on May 29', '154km ESE of Hasaki, Japan')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Morning on May 29', '148km W of Itbayat, Philippines')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on May 29', '10km WSW of Huagai, China')
('A shallow', 'Light', '4.6 earthquake was reported Sun', 'Morning on May 29', '60km SSE of Lakatoro, Vanuatu')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Morning on May 29', '74km S of Lakatoro, Vanuatu')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 29', '36km NW of Kizukuri, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 29', '68km WSW of Puerto Madero, Mexico')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Morning on May 29', '8km WNW of Ain Bessem, Algeria')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Noon on May 28', '86km NE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Moderate', '5.4 earthquake was reported Sat', 'Noon on May 28', '16km SSW of Lakhdaria, Algeria')
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Noon on May 28', '90km W of Vallenar, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Sat', 'Noon on May 28', '120km NE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.9 earthquake was reported Sat', 'Noon on May 28', '119km E of Aileu, East Timor')
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Noon on May 28', '12km NE of Poros, Greece')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Noon on May 28', '193km NE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Noon on May 28', 'Prince Edward Islands region')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Noon on May 28', '20km SE of Shardara, Kazakhstan')
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Noon on May 28', '115km S of Nabire, Indonesia')
('A shallow', 'Light', '4.7 earthquake was reported Sat', 'Noon on May 28', '137km NNE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Noon on May 28', '166km NNE of Esperance, Australia')
('A shallow', 'Light', '4.8 earthquake was reported Sat', 'Noon on May 28', '170km NNE of Esperance, Australia')
('A shallow', 'Light', '4.2 earthquake was reported Sat', 'Noon on May 28', '121km SSW of Raoul Island, New Zealand')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Noon on May 28', '97km NE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.0 earthquake was reported Sat', 'Noon on May 28', '119km NE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.5 earthquake was reported Sat', 'Noon on May 28', "129km ENE of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Noon on May 28', '31km W of Ashkasham, Afghanistan')
('A shallow', 'Light', '4.1 earthquake was reported Sat', 'Noon on May 28', 'South of the Fiji Islands')
('A shallow', 'Major', '7.2 earthquake was reported Sat', 'Morning on May 28', '53km NNE of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.7 earthquake was reported Sat', 'Morning on May 28', '33km NNW of Nagarkot, Nepal')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Morning on May 28', '8km N of Funaishikawa, Japan')
('A shallow', 'Light', '4.6 earthquake was reported Sat', 'Morning on May 28', '74km N of Hachijo-jima, Japan')
('A shallow', 'Strong', '6.6 earthquake was reported Sat', 'Morning on May 28', '161km SSE of Ndoi Island, Fiji')
('A shallow', 'Moderate', '5.2 earthquake was reported Sat', 'Morning on May 28', 'Central Mid-Atlantic Ridge')
('A shallow', 'Light', '4.2 earthquake was reported Sat', 'Morning on May 28', '43km WNW of Lebu, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Sat', 'Morning on May 28', '244km NW of Saumlaki, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Noon on May 27', '288km NNE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Noon on May 27', '136km ENE of Iquique, Chile')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on May 27', '56km NE of Port-Olry, Vanuatu')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on May 27', '300km N of Ndoi Island, Fiji')
('A shallow', 'Light', '4.5 earthquake was reported Fri', 'Noon on May 27', '96km WSW of Ferndale, California')
('A shallow', 'Light', '4.1 earthquake was reported Fri', 'Noon on May 27', '223km NNW of Tual, Indonesia')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on May 27', 'Southwest Indian Ridge')
('A shallow', 'Light', '4.1 earthquake was reported Fri', 'Noon on May 27', '55km NNW of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.1 earthquake was reported Fri', 'Noon on May 27', '176km NE of Thang, India')
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Noon on May 27', "186km SSW of Ust'-Kamchatsk Staryy, Russia")
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Noon on May 27', '104km SSE of Lolayan, Indonesia')
('A shallow', 'Light', '4.0 earthquake was reported Fri', 'Noon on May 27', '72km SW of Ovalle, Chile')
('A shallow', 'Moderate', '5.2 earthquake was reported Fri', 'Noon on May 27', '66km SSE of Kokopo, Papua New Guinea')
('A shallow', 'Light', '4.2 earthquake was reported Fri', 'Noon on May 27', '58km NE of Yelizovo, Russia')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on May 27', '108km E of Pagan, Northern Mariana Islands')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on May 27', '114km N of Kendari, Indonesia')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on May 27', '52km NNW of Finschhafen, Papua New Guinea')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Noon on May 27', '97km N of Tobelo, Indonesia')
('A shallow', 'Light', '4.4 earthquake was reported Fri', 'Noon on May 27', '142km ENE of Taltal, Chile')
('A shallow', 'Light', '4.0 earthquake was reported Fri', 'Noon on May 27', '21km WNW of Piura, Peru')
('A shallow', 'Moderate', '5.2 earthquake was reported Fri', 'Morning on May 27', '13km WNW of Campoverde, Peru')
('A shallow', 'Moderate', '5.0 earthquake was reported Fri', 'Morning on May 27', '96km NW of Port-Vila, Vanuatu')
('A shallow', 'Light', '4.6 earthquake was reported Fri', 'Morning on May 27', '103km ESE of Khonsa, India')
('A shallow', 'Strong', '6.4 earthquake was reported Fri', 'Morning on May 27', '19km S of Ndoi Island, Fiji')
('A shallow', 'Light', '4.7 earthquake was reported Fri', 'Morning on May 27', '172km SSE of Naze, Japan')
('A shallow', 'Moderate', '5.9 earthquake was reported Fri', 'Morning on May 27', '165km SSE of Naze, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Fri', 'Morning on May 27', '12km SE of Lukatan, Philippines')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Noon on May 26', '123km W of Pangai, Tonga')
('A shallow', 'Light', '4.0 earthquake was reported Thu', 'Noon on May 26', "123km W of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.2 earthquake was reported Thu', 'Noon on May 26', '15km N of Amahai, Indonesia')
('A shallow', 'Light', '4.5 earthquake was reported Thu', 'Noon on May 26', 'South of the Fiji Islands')
('A shallow', 'Light', '4.8 earthquake was reported Thu', 'Noon on May 26', '55km W of Coquimbo, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Noon on May 26', '34km N of Yigo Village, Guam')
('A shallow', 'Light', '4.5 earthquake was reported Thu', 'Noon on May 26', '136km SSE of Shizunai, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Thu', 'Noon on May 26', '237km SE of Vostok, Russia')
('A shallow', 'Light', '4.7 earthquake was reported Thu', 'Noon on May 26', '9km ESE of Madang, Papua New Guinea')
('A shallow', 'Light', '4.4 earthquake was reported Thu', 'Noon on May 26', '65km S of Visokoi Island, South Georgia and the South Sandwich Islands')
('A shallow', 'Light', '4.1 earthquake was reported Thu', 'Noon on May 26', '243km NNE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.6 earthquake was reported Thu', 'Noon on May 26', "220km NE of Kuril'sk, Russia")
('A shallow', 'Moderate', '5.1 earthquake was reported Thu', 'Noon on May 26', "282km SSE of L'Esperance Rock, New Zealand")
('A shallow', 'Moderate', '5.0 earthquake was reported Thu', 'Morning on May 26', 'Timor Sea')
('A shallow', 'Moderate', '5.2 earthquake was reported Thu', 'Morning on May 26', '2km S of Moyogalpa, Nicaragua')
('A shallow', 'Light', '4.3 earthquake was reported Thu', 'Morning on May 26', '22km WNW of Massy, Kyrgyzstan')
('A shallow', 'Light', '4.5 earthquake was reported Thu', 'Morning on May 26', '49km ENE of Anatahan, Northern Mariana Islands')
('A shallow', 'Light', '4.2 earthquake was reported Thu', 'Morning on May 26', '77km SSE of Putre, Chile')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Noon on May 25', '248km N of Chichi-shima, Japan')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Noon on May 25', '232km W of Riverton, New Zealand')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Noon on May 25', '55km ESE of Siracusa, Italy')
('A shallow', 'Light', '4.0 earthquake was reported Wed', 'Noon on May 25', '53km NNE of Naze, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on May 25', '15km WNW of Palomares, Mexico')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on May 25', '247km E of Miyako, Japan')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Noon on May 25', '12km ENE of Rota, Northern Mariana Islands')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Noon on May 25', '95km SW of Mapastepec, Mexico')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Noon on May 25', '88km SSW of Pijijiapan, Mexico')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Noon on May 25', '19km E of Marihatag, Philippines')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on May 25', '106km S of Bristol Island, South Sandwich Islands')
('A shallow', 'Light', '4.0 earthquake was reported Wed', 'Noon on May 25', 'South of the Fiji Islands')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Noon on May 25', '190km NW of Farallon de Pajaros, Northern Mariana Islands')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Noon on May 25', '38km WNW of Hengchun, Taiwan')
('A shallow', 'Light', '4.03 earthquake was reported Wed', 'Noon on May 25', '41km NNW of Duchesne, Utah')
('A shallow', 'Light', '4.3 earthquake was reported Wed', 'Noon on May 25', '31km ENE of Lakatoro, Vanuatu')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Noon on May 25', '24km SSW of Hidalgotitlan, Mexico')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Morning on May 25', '112km SSW of Dadali, Solomon Islands')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Morning on May 25', '246km NNW of Tobelo, Indonesia')
('A shallow', 'Light', '4.1 earthquake was reported Wed', 'Morning on May 25', '12km SW of Kato Mazarakion, Greece')
('A shallow', 'Moderate', '5.4 earthquake was reported Wed', 'Morning on May 25', '20km S of Palaikastron, Greece')
('A shallow', 'Light', '4.2 earthquake was reported Wed', 'Morning on May 25', '9km NNE of Piedra Blanca, Dominican Republic')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Morning on May 25', '37km SW of Sarangani, Philippines')
('A shallow', 'Light', '4.9 earthquake was reported Wed', 'Morning on May 25', '81km ENE of Misawa, Japan')
('A shallow', 'Light', '4.5 earthquake was reported Wed', 'Morning on May 25', '91km SE of Hasaki, Japan')
('A shallow', 'Light', '4.0 earthquake was reported Tue', 'Noon on May 24', '181km WNW of Saumlaki, Indonesia')
('A shallow', 'Light', '4.8 earthquake was reported Tue', 'Noon on May 24', '49km WSW of Agrihan, Northern Mariana Islands')
('A shallow', 'Light', '4.6 earthquake was reported Tue', 'Noon on May 24', '31km ESE of Muisne, Ecuador')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Noon on May 24', '1km E of Aileu, East Timor')
('A shallow', 'Light', '4.5 earthquake was reported Tue', 'Noon on May 24', '19km SSW of Ndoi Island, Fiji')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Noon on May 24', '100km SW of Chirovanga, Solomon Islands')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Noon on May 24', '58km SSE of Ofunato, Japan')
('A shallow', 'Light', '4.8 earthquake was reported Tue', 'Noon on May 24', '24km NE of Bojnurd, Iran')
('A shallow', 'Light', '4.9 earthquake was reported Tue', 'Noon on May 24', '12km W of Dhi Na`im, Yemen')
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Noon on May 24', '9km SE of Azogues, Ecuador')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Noon on May 24', '9km ESE of Taradale, New Zealand')
('A shallow', 'Light', '4.1 earthquake was reported Tue', 'Morning on May 24', '19km SE of Kishtwar, India')
('A shallow', 'Light', '4.6 earthquake was reported Tue', 'Morning on May 24', '52km N of Miyako, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Tue', 'Morning on May 24', '150km SE of Ndoi Island, Fiji')
('A shallow', 'Light', '4.0 earthquake was reported Tue', 'Morning on May 24', '45km SW of San Francisco Menendez, El Salvador')
('A shallow', 'Light', '4.4 earthquake was reported Tue', 'Morning on May 24', "64km E of L'Esperance Rock, New Zealand")
('A shallow', 'Light', '4.7 earthquake was reported Tue', 'Morning on May 24', 'South of the Kermadec Islands')
('A shallow', 'Light', '4.2 earthquake was reported Tue', 'Morning on May 24', '50km ESE of Cold Bay, Alaska')
('A shallow', 'Light', '4.9 earthquake was reported Tue', 'Morning on May 24', '51km SSE of Putre, Chile')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Noon on May 23', '3km SSW of Pilar, Philippines')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Noon on May 23', '159km SSE of Hachijo-jima, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Noon on May 23', '52km E of Port-Olry, Vanuatu')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 23', '177km NNW of Dili, East Timor')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 23', '129km SW of Abepura, Indonesia')
('A shallow', 'Moderate', '5.4 earthquake was reported Mon', 'Noon on May 23', '19km W of Cintalapa de Figueroa, Mexico')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Noon on May 23', '18km W of Kirtipur, Nepal')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 23', '169km NNW of Farallon de Pajaros, Northern Mariana Islands')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Noon on May 23', '30km W of Gyangkar, China')
('A shallow', 'Light', '4.0 earthquake was reported Mon', 'Noon on May 23', '255km WNW of Ozernovskiy, Russia')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 23', 'Northern Mid-Atlantic Ridge')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 23', '7km SSW of Yachimata, Japan')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Noon on May 23', '71km NNE of Dili, East Timor')
('A shallow', 'Light', '4.8 earthquake was reported Mon', 'Noon on May 23', '100km W of Makurazaki, Japan')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Noon on May 23', '35km SE of Hitachi, Japan')
('A shallow', 'Light', '4.7 earthquake was reported Mon', 'Noon on May 23', 'Northern Mid-Atlantic Ridge')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 23', 'Central Mid-Atlantic Ridge')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 23', '75km WNW of La Ligua, Chile')
('A shallow', 'Light', '4.6 earthquake was reported Mon', 'Noon on May 23', '66km NNE of Pangai, Tonga')
('A shallow', 'Light', '4.1 earthquake was reported Mon', 'Noon on May 23', '80km W of San Antonio de los Cobres, Argentina')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Noon on May 23', '71km SW of Ndoi Island, Fiji')
('A shallow', 'Light', '4.5 earthquake was reported Mon', 'Morning on May 23', 'East of the Kuril Islands')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Morning on May 23', '15km SSW of Ndoi Island, Fiji')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on May 23', '91km SSE of Isangel, Vanuatu')
('A shallow', 'Moderate', '5.0 earthquake was reported Mon', 'Morning on May 23', '134km NE of San Pedro de Atacama, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Mon', 'Morning on May 23', '6km S of Bogorawatu, Indonesia')
('A shallow', 'Light', '4.4 earthquake was reported Mon', 'Morning on May 23', '125km S of Attu Station, Alaska')
('A shallow', 'Light', '4.2 earthquake was reported Mon', 'Morning on May 23', '23km ESE of Jarm, Afghanistan')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on May 22', '142km W of Itbayat, Philippines')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Noon on May 22', '249km E of Enarotali, Indonesia')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Noon on May 22', '162km ESE of Sarangani, Philippines')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Noon on May 22', '37km SSW of Jarm, Afghanistan')
('A shallow', 'Moderate', '5.0 earthquake was reported Sun', 'Noon on May 22', '37km SE of Iztapa, Guatemala')
('A shallow', 'Light', '4.0 earthquake was reported Sun', 'Noon on May 22', '127km N of Saumlaki, Indonesia')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Noon on May 22', '11km SSW of Piedecuesta, Colombia')
('A shallow', 'Light', '4.7 earthquake was reported Sun', 'Noon on May 22', 'North of Severnaya Zemlya')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Noon on May 22', '50km S of Bambanglipuro, Indonesia')
('A shallow', 'Moderate', '5.3 earthquake was reported Sun', 'Noon on May 22', '47km WNW of Tartagal, Argentina')
('A shallow', 'Moderate', '5.1 earthquake was reported Sun', 'Noon on May 22', '46km W of Ovalle, Chile')
('A shallow', 'Light', '4.5 earthquake was reported Sun', 'Noon on May 22', '160km WSW of San Antonio, Chile')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 22', "102km ESE of Ust'-Kamchatsk Staryy, Russia")
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 22', '31km NW of Agrihan, Northern Mariana Islands')
('A shallow', 'Light', '4.8 earthquake was reported Sun', 'Morning on May 22', '33km W of Chaoyang, China')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 22', '5km ENE of Strumyani, Bulgaria')
('A shallow', 'Light', '4.4 earthquake was reported Sun', 'Morning on May 22', '38km ESE of Kamaishi, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 22', '198km NNW of Sola, Vanuatu')
('A shallow', 'Light', '4.0 earthquake was reported Sun', 'Morning on May 22', '7km WSW of Atalanti, Greece')
('A shallow', 'Light', '4.1 earthquake was reported Sun', 'Morning on May 22', '126km ESE of Iwaki, Japan')
('A shallow', 'Light', '4.3 earthquake was reported Sun', 'Morning on May 22', '27km NW of Gyangkar, China')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on May 22', '265km SE of Hachijo-jima, Japan')
('A shallow', 'Light', '4.2 earthquake was reported Sun', 'Morning on May 22', '30km WSW of Tobelo, Indonesia')
###Markdown
4. PART FOUR: The other bits
###Code
import dateutil.parser

def mag_is(my_mag):
    # Return the event's magnitude as a string so it can be joined into a sentence
    return str(my_mag['mag'])
mag_is(earthquake)

def type_of_event(my_event):
    # Return the event type, e.g. "earthquake", "explosion" or "quarry blast"
    return str(my_event['type'])
type_of_event(earthquake)

def date_in_words(timestring):
    # Parse the event's timestamp and format it like "Jun 20"
    yourdate = dateutil.parser.parse(timestring['time'])
    return yourdate.strftime("%b %d")
date_in_words(earthquake)

def loc(locale):
    # Return the human-readable place description
    return str(locale['place'])
loc(earthquake)

def eq_to_sent(my_dict):
    # Build a tuple of sentence fragments describing the event
    return "There was also a magnitude " + mag_is(my_dict), type_of_event(my_dict) + " on " + date_in_words(my_dict), loc(my_dict)
eq_to_sent(earthquake)

# Report every event in the feed that is not a plain earthquake
for item in earthquakes:
    if item['type'] != "earthquake":
        print(eq_to_sent(item))
###Output
('There was also a magnitude 1.33', 'quarry blast on Jun 20', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.85', 'explosion on Jun 20', '2km E of Granite Falls, Washington')
('There was also a magnitude 1.69', 'quarry blast on Jun 20', '0km SSW of Home Gardens, CA')
('There was also a magnitude 1.77', 'quarry blast on Jun 20', '7km SSE of Home Gardens, CA')
('There was also a magnitude 1.29', 'other event on Jun 19', '10km SW of Bridgeport, Washington')
('There was also a magnitude 1.95', 'explosion on Jun 19', '1km SSW of Princeton, Canada')
('There was also a magnitude 1.36', 'other event on Jun 19', '10km SW of Bridgeport, Washington')
('There was also a magnitude 1.27', 'other event on Jun 19', '30km ESE of Sweet Home, Oregon')
('There was also a magnitude 1.74', 'explosion on Jun 18', '9km S of Princeton, Canada')
('There was also a magnitude 1.22', 'explosion on Jun 18', '8km E of Yacolt, Washington')
('There was also a magnitude 1.99', 'explosion on Jun 17', '26km WSW of Cheney, Washington')
('There was also a magnitude 1.57', 'explosion on Jun 17', '8km WNW of Junction City, Oregon')
('There was also a magnitude 1.48', 'quarry blast on Jun 17', '4km SE of Home Gardens, CA')
('There was also a magnitude 1.66', 'explosion on Jun 17', '14km NNW of Philomath, Oregon')
('There was also a magnitude 1.85', 'quarry blast on Jun 17', '4km ENE of Butte, Montana')
('There was also a magnitude 1.63', 'quarry blast on Jun 17', '0km E of Quarry near Salinas, CA')
('There was also a magnitude 1.27', 'quarry blast on Jun 17', '10km NNW of Big Bear City, CA')
('There was also a magnitude 1.36', 'quarry blast on Jun 16', '2km SE of Home Gardens, CA')
('There was also a magnitude 1.24', 'quarry blast on Jun 16', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.48', 'quarry blast on Jun 16', '0km SE of Quarry near Vallejo, CA')
('There was also a magnitude 1.35', 'explosion on Jun 16', '28km SW of Morton, Washington')
('There was also a magnitude 1.16', 'quarry blast on Jun 16', '6km SSW of Mojave, CA')
('There was also a magnitude 1.09', 'explosion on Jun 16', '28km SW of Morton, Washington')
('There was also a magnitude 2.08', 'quarry blast on Jun 16', '16km SW of Kemmerer, Wyoming')
('There was also a magnitude 1.15', 'explosion on Jun 16', '25km SW of Morton, Washington')
('There was also a magnitude 1.3', 'explosion on Jun 16', '25km SW of Morton, Washington')
('There was also a magnitude 1.07', 'quarry blast on Jun 16', '13km WNW of Searles Valley, CA')
('There was also a magnitude 1.33', 'quarry blast on Jun 15', '12km WNW of Whitehall, Montana')
('There was also a magnitude 2.14', 'explosion on Jun 15', '5km S of Princeton, Canada')
('There was also a magnitude 1.56', 'quarry blast on Jun 15', '1km NW of Quarry near Salinas, CA')
('There was also a magnitude 1.29', 'quarry blast on Jun 15', '4km SSE of Home Gardens, CA')
('There was also a magnitude 1.25', 'quarry blast on Jun 15', '5km ENE of Butte, Montana')
('There was also a magnitude 1.12', 'quarry blast on Jun 15', '2km SW of Quarry near San Rafael, CA')
('There was also a magnitude 1.35', 'explosion on Jun 15', '14km WSW of Cashmere, Washington')
('There was also a magnitude 1.1', 'explosion on Jun 14', '4km N of Fern Prairie, Washington')
('There was also a magnitude 1.57', 'quarry blast on Jun 14', '46km NE of Holtville, CA')
('There was also a magnitude 1.44', 'quarry blast on Jun 14', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.33', 'quarry blast on Jun 14', '3km SSE of Quarry near Aromas, CA')
('There was also a magnitude 1.35', 'quarry blast on Jun 14', '4km ENE of Rancho San Diego, CA')
('There was also a magnitude 1.03', 'quarry blast on Jun 14', '10km ESE of Coto De Caza, CA')
('There was also a magnitude 1.17', 'quarry blast on Jun 14', '6km SSW of Mojave, CA')
('There was also a magnitude 1.36', 'quarry blast on Jun 14', '13km SE of Tehachapi, CA')
('There was also a magnitude 1.05', 'explosion on Jun 14', '5km E of Yoncalla, Oregon')
('There was also a magnitude 1.11', 'quarry blast on Jun 13', '5km NNW of Boron, CA')
('There was also a magnitude 1.13', 'explosion on Jun 13', '10km NNW of Philomath, Oregon')
('There was also a magnitude 2.18', 'quarry blast on Jun 13', '46km NE of Holtville, CA')
('There was also a magnitude 2.38', 'explosion on Jun 13', '1km WNW of Princeton, Canada')
('There was also a magnitude 1.65', 'quarry blast on Jun 13', '1km W of Tijuana, B.C., MX')
('There was also a magnitude 1.26', 'explosion on Jun 13', '13km S of Morton, Washington')
('There was also a magnitude 1.09', 'quarry blast on Jun 13', '5km S of Mojave, CA')
('There was also a magnitude 1.75', 'quarry blast on Jun 13', '5km E of Butte, Montana')
('There was also a magnitude 1.53', 'quarry blast on Jun 13', '9km NNW of Big Bear Lake, CA')
('There was also a magnitude 2.02', 'explosion on Jun 11', '2km NNE of Princeton, Canada')
('There was also a magnitude 1.47', 'quarry blast on Jun 10', '4km SE of Home Gardens, CA')
('There was also a magnitude 2.01', 'explosion on Jun 10', '9km S of Princeton, Canada')
('There was also a magnitude 1.19', 'quarry blast on Jun 10', '13km SE of Tehachapi, CA')
('There was also a magnitude 1.41', 'quarry blast on Jun 09', '44km NNW of Los Algodones, B.C., MX')
('There was also a magnitude 1.45', 'quarry blast on Jun 09', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.11', 'quarry blast on Jun 09', '13km W of Mojave, CA')
('There was also a magnitude 1.9', 'explosion on Jun 09', '1km S of Princeton, Canada')
('There was also a magnitude 1.42', 'quarry blast on Jun 09', '7km SSW of Mojave, CA')
('There was also a magnitude 1.45', 'quarry blast on Jun 09', '1km SSE of Quarry near Aromas, CA')
('There was also a magnitude 1.55', 'quarry blast on Jun 09', '28km SE of Virginia City, Montana')
('There was also a magnitude 1.63', 'explosion on Jun 09', '23km NNW of Baker City, Oregon')
('There was also a magnitude 1.72', 'quarry blast on Jun 08', '47km NE of Holtville, CA')
('There was also a magnitude 1.93', 'explosion on Jun 08', '6km SSW of Princeton, Canada')
('There was also a magnitude 1.3', 'quarry blast on Jun 08', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.0', 'explosion on Jun 08', '16km ESE of Enumclaw, Washington')
('There was also a magnitude 1.57', 'explosion on Jun 08', '5km SW of Napavine, Washington')
('There was also a magnitude 1.8', 'quarry blast on Jun 08', '7km SSE of Home Gardens, CA')
('There was also a magnitude 1.55', 'quarry blast on Jun 07', '0km N of Quarry near Portola Valley, CA')
('There was also a magnitude 1.28', 'explosion on Jun 07', '3km E of West Side Highway, Washington')
('There was also a magnitude 2.1', 'explosion on Jun 07', '1km WSW of Princeton, Canada')
('There was also a magnitude 1.95', 'explosion on Jun 07', '32km E of Shady Cove, Oregon')
('There was also a magnitude 1.44', 'quarry blast on Jun 07', '0km E of Quarry near Atascadero, CA')
('There was also a magnitude 1.6', 'quarry blast on Jun 07', '8km W of Townsend, Montana')
('There was also a magnitude 1.63', 'quarry blast on Jun 06', '43km NNW of Los Algodones, B.C., MX')
('There was also a magnitude 1.35', 'quarry blast on Jun 06', '5km NNW of Boron, CA')
('There was also a magnitude 1.47', 'quarry blast on Jun 06', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.77', 'quarry blast on Jun 06', '4km E of Butte, Montana')
('There was also a magnitude 1.1', 'quarry blast on Jun 06', '2km SW of Quarry near San Rafael, CA')
('There was also a magnitude 1.09', 'quarry blast on Jun 06', '13km SE of Tehachapi, CA')
('There was also a magnitude 2.0', 'explosion on Jun 05', '6km SSE of Princeton, Canada')
('There was also a magnitude 1.93', 'explosion on Jun 04', '3km S of Princeton, Canada')
('There was also a magnitude 1.09', 'explosion on Jun 03', '5km N of Fern Prairie, Washington')
('There was also a magnitude 1.76', 'quarry blast on Jun 03', '46km NNW of Los Algodones, B.C., MX')
('There was also a magnitude 1.82', 'explosion on Jun 03', '0km SW of Dundee, Oregon')
('There was also a magnitude 1.52', 'quarry blast on Jun 03', '4km SE of Home Gardens, CA')
('There was also a magnitude 1.38', 'quarry blast on Jun 03', '13km W of Mojave, CA')
('There was also a magnitude 1.84', 'quarry blast on Jun 03', '5km ENE of Butte, Montana')
('There was also a magnitude 1.24', 'quarry blast on Jun 03', '2km WSW of Quarry near Clayton, CA')
('There was also a magnitude 1.53', 'explosion on Jun 03', '22km NNE of Pasco, Washington')
('There was also a magnitude 1.07', 'quarry blast on Jun 03', '6km ENE of Tehachapi, CA')
('There was also a magnitude 1.37', 'quarry blast on Jun 02', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 2.0', 'explosion on Jun 02', '25km SW of Cheney, Washington')
('There was also a magnitude 2.08', 'explosion on Jun 02', '2km NE of Coos Bay, Oregon')
('There was also a magnitude 1.02', 'quarry blast on Jun 02', '3km SSE of San Marcos, CA')
('There was also a magnitude 1.05', 'quarry blast on Jun 02', '6km SSW of Mojave, CA')
('There was also a magnitude 1.53', 'quarry blast on Jun 02', '20km S of Quarry near Atascadero, CA')
('There was also a magnitude 1.33', 'quarry blast on Jun 02', '7km ESE of Butte, Montana')
('There was also a magnitude 1.35', 'explosion on Jun 02', '5km E of Buckley, Washington')
('There was also a magnitude 1.34', 'quarry blast on Jun 02', '12km SE of Tehachapi, CA')
('There was also a magnitude 1.56', 'quarry blast on Jun 01', '5km NNW of Boron, CA')
('There was also a magnitude 1.42', 'explosion on Jun 01', '14km S of Leavenworth, Washington')
('There was also a magnitude 1.34', 'quarry blast on Jun 01', '3km SSE of Home Gardens, CA')
('There was also a magnitude 1.81', 'explosion on Jun 01', '12km S of Princeton, Canada')
('There was also a magnitude 1.01', 'quarry blast on Jun 01', '7km E of Lebec, CA')
('There was also a magnitude 1.4', 'quarry blast on Jun 01', '7km SE of Bonita, CA')
('There was also a magnitude 2.3', 'quarry blast on Jun 01', '17km N of Orofino, Idaho')
('There was also a magnitude 1.26', 'quarry blast on Jun 01', '4km SE of Home Gardens, CA')
('There was also a magnitude 1.35', 'quarry blast on Jun 01', '6km SSE of Valley Center, CA')
('There was also a magnitude 1.21', 'explosion on Jun 01', '16km W of Winston, Oregon')
('There was also a magnitude 1.18', 'explosion on May 31', '3km E of Kelso, Washington')
('There was also a magnitude 1.56', 'quarry blast on May 31', '8km ESE of Bonita, CA')
('There was also a magnitude 1.41', 'quarry blast on May 31', '45km NE of Holtville, CA')
('There was also a magnitude 1.23', 'quarry blast on May 31', '12km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.57', 'explosion on May 31', '2km SSW of Princeton, Canada')
('There was also a magnitude 1.41', 'quarry blast on May 31', '4km N of Norco, CA')
('There was also a magnitude 2.04', 'quarry blast on May 31', '28km N of Orofino, Idaho')
('There was also a magnitude 1.2', 'quarry blast on May 31', '7km SSW of Mojave, CA')
('There was also a magnitude 2.23', 'explosion on May 29', '10km S of Princeton, Canada')
('There was also a magnitude 1.66', 'explosion on May 28', '2km S of Princeton, Canada')
('There was also a magnitude 1.84', 'explosion on May 28', '7km NE of Abbotsford, Canada')
('There was also a magnitude 1.55', 'explosion on May 28', '3km SW of Drain, Oregon')
('There was also a magnitude 1.2', 'quarry blast on May 27', '0km S of Quarry near Vallejo, CA')
('There was also a magnitude 1.79', 'explosion on May 27', '5km SSE of Princeton, Canada')
('There was also a magnitude 1.37', 'quarry blast on May 27', '7km SSW of Mojave, CA')
('There was also a magnitude 1.48', 'quarry blast on May 27', '6km ESE of Butte, Montana')
('There was also a magnitude 1.31', 'quarry blast on May 27', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.63', 'quarry blast on May 26', '46km NNW of Los Algodones, B.C., MX')
('There was also a magnitude 1.57', 'quarry blast on May 26', '4km NNW of Boron, CA')
('There was also a magnitude 1.99', 'explosion on May 26', '5km SSE of Princeton, Canada')
('There was also a magnitude 1.96', 'quarry blast on May 26', '11km ESE of Santa Ynez, CA')
('There was also a magnitude 1.31', 'quarry blast on May 26', '6km ENE of Three Forks, Montana')
('There was also a magnitude 1.44', 'explosion on May 26', '19km SE of Cottage Grove, Oregon')
('There was also a magnitude 1.08', 'explosion on May 26', '24km SE of Sweet Home, Oregon')
('There was also a magnitude 1.94', 'quarry blast on May 26', '10km N of Oroville, California')
('There was also a magnitude 1.08', 'quarry blast on May 26', '13km SE of Tehachapi, CA')
('There was also a magnitude 1.13', 'quarry blast on May 25', '8km ENE of Lebec, CA')
('There was also a magnitude 1.85', 'quarry blast on May 25', '4km ENE of Butte, Montana')
('There was also a magnitude 1.49', 'quarry blast on May 25', '7km SSE of Home Gardens, CA')
('There was also a magnitude 1.23', 'quarry blast on May 25', '10km NNW of Big Bear City, CA')
('There was also a magnitude 1.03', 'quarry blast on May 24', '1km WSW of Quarry near Milpitas, CA')
('There was also a magnitude 1.21', 'quarry blast on May 24', '45km NNW of Los Algodones, B.C., MX')
('There was also a magnitude 1.64', 'explosion on May 24', '14km SE of McCleary, Washington')
('There was also a magnitude 2.03', 'explosion on May 24', '7km S of Princeton, Canada')
('There was also a magnitude 1.35', 'explosion on May 24', '10km SE of Graham, Washington')
('There was also a magnitude 1.01', 'quarry blast on May 24', '14km W of Mojave, CA')
('There was also a magnitude 1.08', 'quarry blast on May 24', '5km NE of Rancho San Diego, CA')
('There was also a magnitude 1.31', 'quarry blast on May 24', '2km SW of Quarry near Clayton, CA')
('There was also a magnitude 1.49', 'quarry blast on May 23', '6km NNW of Boron, CA')
('There was also a magnitude 1.79', 'quarry blast on May 23', '46km NNW of Los Algodones, B.C., MX')
('There was also a magnitude 2.55', 'explosion on May 23', '11km S of Agassiz, Canada')
('There was also a magnitude 1.05', 'quarry blast on May 23', '1km WNW of Quarry near Vallejo, CA')
('There was also a magnitude 1.5', 'quarry blast on May 23', '11km E of Quarry near Portola Valley, CA')
('There was also a magnitude 1.4', 'quarry blast on May 23', '5km N of Lake Elsinore, CA')
('There was also a magnitude 1.33', 'quarry blast on May 23', '7km SSW of Mojave, CA')
('There was also a magnitude 1.6', 'explosion on May 23', '5km WNW of Junction City, Oregon')
('There was also a magnitude 1.17', 'quarry blast on May 23', '13km SE of Tehachapi, CA')
('There was also a magnitude 1.6', 'explosion on May 22', '4km S of Princeton, Canada')
|
author_initiations/GraphCharacterization.ipynb | ###Markdown
Graph characterization===
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import re
import pandas as pd
import numpy as np
from collections import Counter, defaultdict
import sqlite3
from tqdm import tqdm
import random
import pickle
from datetime import datetime
import bisect
import matplotlib.pyplot as plt
import matplotlib.dates as md
import matplotlib
import pylab as pl
from IPython.core.display import display, HTML
import networkx as nx
working_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/author_initiations"
assert os.path.exists(working_dir)
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
figures_dir = os.path.join(git_root_dir, 'figures')
figures_dir
start_date = datetime.fromisoformat('2005-01-01')
start_timestamp = int(start_date.timestamp() * 1000)
end_date = datetime.fromisoformat('2016-06-01')
end_timestamp = int(end_date.timestamp() * 1000)
subset_start_date = datetime.fromisoformat('2014-01-01')
subset_start_timestamp = int(subset_start_date.timestamp() * 1000)
###Output
_____no_output_____
###Markdown
Read in the data
###Code
# load the list of valid users
data_selection_working_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/data_selection"
valid_user_ids = set()
with open(os.path.join(data_selection_working_dir, "valid_user_ids.txt"), 'r') as infile:
for line in infile:
user_id = line.strip()
if user_id == "":
continue
else:
valid_user_ids.add(int(user_id))
len(valid_user_ids)
# load the list of valid sites
data_selection_working_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/data_selection"
valid_site_ids = set()
with open(os.path.join(data_selection_working_dir, "valid_site_ids.txt"), 'r') as infile:
for line in infile:
site_id = line.strip()
if site_id == "":
continue
else:
valid_site_ids.add(int(site_id))
len(valid_site_ids)
# read the journal metadata with author type info added
s = datetime.now()
author_type_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/author_type"
journal_metadata_filepath = os.path.join(author_type_dir, "journal_metadata_with_author_type.df")
journal_df = pd.read_feather(journal_metadata_filepath)
print(datetime.now() - s)
len(journal_df)
# as a quick fix for invalid dates in journals, when created_at is 0 we use the updated_at instead
# note that only 41 updates have this issue
invalid_created_at = journal_df.created_at <= 0
journal_df.loc[invalid_created_at, 'created_at'] = journal_df.loc[invalid_created_at, 'updated_at']
# read the user author type dataframe
author_type_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/author_type"
user_patient_proportions_filepath = os.path.join(author_type_dir, 'user_patient_proportions.df')
user_df = pd.read_feather(user_patient_proportions_filepath)
len(user_df)
# read the user->user interactions dataframe
metadata_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/user_metadata"
u2u_df = pd.read_feather(os.path.join(metadata_dir,"u2u_df.feather"))
len(u2u_df)
# read the site-level metadata
site_metadata_working_dir = "/home/srivbane/shared/caringbridge/data/derived/site_metadata"
site_metadata_filepath = os.path.join(site_metadata_working_dir, "site_metadata.feather")
site_metadata_df = pd.read_feather(site_metadata_filepath)
len(site_metadata_df)
# read in the interactions dataframe
metadata_dir = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/user_metadata"
author_to_site = os.path.join(metadata_dir, "interaction_metadata.h5")
ints_df = pd.read_hdf(author_to_site)
len(ints_df)
###Output
_____no_output_____
###Markdown
Filter the u2u links
###Code
valid_u2u_df = u2u_df[(u2u_df.from_user_id.isin(valid_user_ids))&(u2u_df.to_user_id.isin(valid_user_ids))]
len(valid_u2u_df), len(valid_u2u_df) / len(u2u_df)
inits_df = valid_u2u_df.sort_values(by='created_at', ascending=True).drop_duplicates(subset=['from_user_id', 'to_user_id'], keep='first')
len(inits_df), len(inits_df) / len(u2u_df)
valid_ints_df = ints_df[ints_df.user_id.isin(valid_user_ids)]
len(valid_ints_df), len(valid_ints_df) / len(ints_df)
model_start_date = datetime.fromisoformat('2014-01-01')
model_start_timestamp = int(model_start_date.timestamp() * 1000)
model_end_date = datetime.fromisoformat('2016-01-01')
model_end_timestamp = int(model_end_date.timestamp() * 1000)
###Output
_____no_output_____
###Markdown
Build the graph at a specified time
###Code
target_timestamp = end_timestamp
inits_subset = inits_df[inits_df.created_at < target_timestamp]
len(inits_subset)
s = datetime.now()
base_graph = nx.DiGraph()
nodes = set(inits_subset.from_user_id) | set(inits_subset.to_user_id)
edges = [tuple(row) for row in inits_subset[["from_user_id", "to_user_id"]].values]
base_graph.add_nodes_from(nodes)
base_graph.add_edges_from(edges)
print(f"{datetime.now() - s}")
G = base_graph
# compute users active at target_timestamp
# active users who have interacted within some threshold
threshold = int(1000 * 60 * 60 * 24 * (364 / 2)) # about 6 months
active_users = set(ints_df[(ints_df.created_at >= target_timestamp - threshold)&(ints_df.created_at <= target_timestamp)&(ints_df.user_id.isin(valid_user_ids))].user_id)
len(active_users), len(active_users) / len(valid_user_ids)
364 / 2
isolate_count = 0
no_outdegree_count = 0
no_indegree_count = 0
for active_user in active_users:
if active_user not in G.nodes:
no_outdegree_count += 1
no_indegree_count += 1
isolate_count += 1
continue
if G.out_degree(active_user) == 0:
no_outdegree_count += 1
if G.in_degree(active_user) == 0:
no_indegree_count += 1
if G.out_degree(active_user) == 0 and G.in_degree(active_user) == 0:
isolate_count += 1
np.max(np.array(G.in_degree())[:,1]), np.max(np.array(G.out_degree())[:,1])
isolate_count / len(active_users),\
no_indegree_count / len(active_users),\
no_outdegree_count / len(active_users)
sccs = nx.strongly_connected_components(G)
lscc = max(sccs, key=len)
second_largest_size = sorted(map(len, nx.strongly_connected_components(G)))[-2]
len(lscc), second_largest_size
wccs = nx.weakly_connected_components(G)
lwcc = max(wccs, key=len)
second_largest_size = sorted(map(len, nx.weakly_connected_components(G)))[-2]
len(lwcc), second_largest_size
G_active = G.subgraph(active_users)
len(G_active), len(G)
np.max(np.array(G_active.in_degree())[:,1]), np.max(np.array(G_active.out_degree())[:,1])
wccs = nx.weakly_connected_components(G_active)
lwcc = max(wccs, key=len)
second_largest_size = sorted(map(len, nx.weakly_connected_components(G_active)))[-2]
len(lwcc), second_largest_size
sccs = nx.strongly_connected_components(G_active)
lscc = max(sccs, key=len)
second_largest_size = sorted(map(len, nx.strongly_connected_components(G_active)))[-2]
len(lscc), second_largest_size
# warning: SLOW
s = datetime.now()
diam = nx.diameter(G_active.subgraph(lscc))
print(f"{datetime.now() - s}")
diam
G_active.number_of_edges()
# compute number of dyads in G_active (and number of authors in dyads)
G_recip = G_active.to_undirected(reciprocal=True)
authors_in_dyads = sum([degree > 0 for node, degree in G_recip.degree()])
total_dyads = G_recip.number_of_edges()
authors_in_dyads, total_dyads
# number of SCCs and WCCs
g = G_active
scc_sizes = np.array(list(map(len, nx.strongly_connected_components(g))))
scc_sizes = scc_sizes[scc_sizes > 1]
wcc_sizes = np.array(list(map(len, nx.weakly_connected_components(g))))
wcc_sizes = wcc_sizes[wcc_sizes > 1]
len(scc_sizes), len(wcc_sizes)
x = np.sort(scc_sizes)[:-1]
fig, ax = plt.subplots(1, 1, figsize=(5.47807 / 2, 1.6))
ax.hist(x, bins=12, log=False, align='mid')
ax.set_xlabel("SCC Size")
ax.set_ylabel("# Components")
plt.xticks(range(2, 16, 2))
#plt.yticks(np.logspace(start=0, stop=4, base=10, num=5))
import matplotlib.ticker as ticker
plt.yscale('log')
#ax.yaxis.set_major_locator(ticker.FixedLocator([0, 10, 10000]))
#ax.yaxis.set_minor_locator(ticker.NullLocator())
#ax.yaxis.set_minor_formatter(ticker.ScalarFormatter())
plt.tight_layout(pad=0)
plt.margins(0,0)
plt.savefig(os.path.join(figures_dir, 'network_scc_dist_plot.pdf'), dpi=180, pad_inches=0)
plt.show()
x = np.sort(wcc_sizes)[:-1]
fig, ax = plt.subplots(1, 1, figsize=(5.47807 / 2, 1.6))
ax.hist(x, bins=12, log=True, align='mid')
ax.set_xlabel("WCC Size")
ax.set_ylabel("# Components")
plt.xticks(range(2, 18, 2))
plt.tight_layout(pad=0)
plt.margins(0,0)
plt.savefig(os.path.join(figures_dir, 'network_wcc_dist_plot.pdf'), dpi=180, pad_inches=0)
plt.show()
bins = []
#bins.append(start_date.timestamp() * 1000)
year = 2005
month = 1
while year != 2016 or month != 7:
bins.append(datetime.fromisoformat(f"{year}-{month:02}-01").timestamp() * 1000)
month += 3
if month >= 12:
year += 1
month = 1
len(bins)
###Output
_____no_output_____
###Markdown
We compute network features every three months during the year: at the start of months 1, 4, 7, and 10.
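###Code
# A quick sanity check of the quarterly bin edges above: a minimal sketch, assuming the
# `bins` list built two cells earlier; '3MS' is pandas' "every 3 month-starts" frequency.
quarterly = pd.date_range('2005-01-01', '2016-04-01', freq='3MS')
len(quarterly), len(bins)
###Output
_____no_output_____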
###Code
threshold = int(1000 * 60 * 60 * 24 * (364 / 2)) # about 6 months
active_user_counts = []
lwcc_sizes = []
lscc_sizes = []
isolate_counts = []
in_lwcc_counts = []
in_other_comp_counts = []
for target_timestamp in tqdm(bins):
inits_subset = inits_df[inits_df.created_at <= target_timestamp]
g = nx.DiGraph()
nodes = set(inits_subset.from_user_id) | set(inits_subset.to_user_id)
edges = [tuple(row) for row in inits_subset[["from_user_id", "to_user_id"]].values]
g.add_nodes_from(nodes) # fixed: was base_graph, which accumulated nodes across iterations
g.add_edges_from(edges) # fixed: was base_graph
active_users = set(valid_ints_df[(valid_ints_df.created_at >= target_timestamp - threshold)&(valid_ints_df.created_at <= target_timestamp)].user_id)
g_active = g.subgraph(active_users) # fixed: was G, the full-period graph, not the per-bin graph g
active_user_count = len(active_users)
active_user_counts.append(active_user_count)
# nodes that appear in the active users list but not in the graph are isolates
isolate_count = len(active_users - g_active.nodes)
wccs = nx.weakly_connected_components(g_active)
lwcc = max(wccs, key=len)
lwcc_sizes.append(len(lwcc))
in_lwcc_count = 0
in_other_comp_count = 0
for active_user in g_active.nodes:
if active_user in lwcc:
in_lwcc_count += 1
else:
in_other_comp_count += 1
isolate_counts.append(isolate_count)
in_lwcc_counts.append(in_lwcc_count)
in_other_comp_counts.append(in_other_comp_count)
assert active_user_count == isolate_count + in_lwcc_count + in_other_comp_count
sccs = nx.strongly_connected_components(g_active)
lscc = max(sccs, key=len)
lscc_sizes.append(len(lscc))
active_user_counts = np.array(active_user_counts)
isolate_pcts = np.array(isolate_counts) / active_user_counts
in_lwcc_pcts = np.array(in_lwcc_counts) / active_user_counts
in_other_comp_pcts = np.array(in_other_comp_counts) / active_user_counts
fig, ax = plt.subplots(1, 1, figsize=(8,2))
plt.plot(bins, isolate_pcts, linestyle='solid', linewidth=2, label='Isolates')
plt.plot(bins, in_lwcc_pcts, linestyle='dotted', linewidth=2, label='LWCC')
plt.plot(bins, np.array(lscc_sizes) / active_user_counts, linestyle='dashed', linewidth=2, label='LSCC')
plt.plot(bins, in_other_comp_pcts, linestyle='dashdot', linewidth=2, label='Other')
plt.axvline(subset_start_timestamp, color='black', alpha=0.2, linestyle='--', linewidth=1)
plt.legend(ncol=4, frameon=False, loc=9)
plt.ylim((0,0.99))
plt.ylabel("% active authors")
newline = '\n'
xticks = [datetime.fromisoformat(f"{2005 + i // 2}-{'01' if i % 2 == 0 else '07'}-01").timestamp() * 1000 for i in range((2016 - 2005) * 2 + 2)]
plt.xticks(
xticks,
[f"{newline if i%2 == 0 else ''}{datetime.utcfromtimestamp(be / 1000).strftime('%Y-%m')}" for i, be in enumerate(xticks)])
plt.tight_layout(pad=0)
plt.margins(0,0)
plt.savefig(os.path.join(figures_dir, 'network_summary_timeline_plot.pdf'), dpi=200, pad_inches=0)
plt.show()
isolate_pcts[-1], in_lwcc_pcts[-1], (np.array(lscc_sizes) / active_user_counts)[-1], in_other_comp_pcts[-1]
# range of active user counts during the analysis period
active_user_counts[np.array(bins) >= subset_start_timestamp]
fig, ax = plt.subplots(1, 1, figsize=(8,2))
plt.plot(bins, active_user_counts, linestyle='-', linewidth=2, label='Active Authors')
#plt.plot(bins, lwcc_sizes, linestyle='-', linewidth=2, label='LWCC Size')
#plt.plot(bins, lscc_sizes, linestyle='-', linewidth=2, label='LSCC Size')
plt.axvline(subset_start_timestamp, color='black', alpha=0.2, linestyle='--', linewidth=1)
#plt.legend(ncol=2, frameon=False, loc=10)
plt.ylabel("Active Author Count")
newline = '\n'
xticks = [datetime.fromisoformat(f"{2005 + i // 2}-{'01' if i % 2 == 0 else '07'}-01").timestamp() * 1000 for i in range((2016 - 2005) * 2 + 2)]
plt.xticks(
xticks,
[f"{newline if i%2 == 0 else ''}{datetime.utcfromtimestamp(be / 1000).strftime('%Y-%m')}" for i, be in enumerate(xticks)])
plt.tight_layout(pad=0)
plt.margins(0,0)
plt.savefig(os.path.join(figures_dir, 'network_user_summary_timeline_plot.pdf'), dpi=200, pad_inches=0)
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(8,2))
#plt.plot(bins, isolate_pcts, linestyle='-', linewidth=2, label='Isolates')
plt.plot(bins, np.array(lwcc_sizes) / active_user_counts, linestyle='-', linewidth=2, label='LWCC %')
plt.plot(bins, np.array(lscc_sizes) / active_user_counts, linestyle='-', linewidth=2, label='LSCC Size')
plt.axvline(subset_start_timestamp, color='black', alpha=0.2, linestyle='--', linewidth=1)
#plt.legend(ncol=2, frameon=False, loc=10)
plt.ylim((0,0.9))
plt.ylabel("% active authors")
newline = '\n'
xticks = [datetime.fromisoformat(f"{2005 + i // 2}-{'01' if i % 2 == 0 else '07'}-01").timestamp() * 1000 for i in range((2016 - 2005) * 2 + 2)]
plt.xticks(
xticks,
[f"{newline if i%2 == 0 else ''}{datetime.utcfromtimestamp(be / 1000).strftime('%Y-%m')}" for i, be in enumerate(xticks)])
plt.tight_layout(pad=0)
plt.margins(0,0)
plt.savefig(os.path.join(figures_dir, 'network_wcc_vs_scc_timeline_plot.pdf'), dpi=200, pad_inches=0)
plt.show()
#hist, bin_edges = np.histogram(initiations_df.created_at, bins=bins)
#plt.plot(bin_edges[:-1], hist, linestyle='-', linewidth=1, label='Initiations')
n = 20000
s = 24
# sample n initiations
# using s negative samples
# valid candidate users are ALL valid authors who have posted their first update at this time
inits_subset = inits_df[(inits_df.created_at >= model_start_timestamp)&(inits_df.created_at <= model_end_timestamp)]
inits_subset = inits_subset.sample(n=n).sort_values(by='created_at', ascending=True)
inits_subset.head()
user_df['time_to_first_update'] = user_df.first_update - model_start_timestamp
# if first update is positive, it is still in the future
# if first update is <= 0, then it should already be an eligible node
# however, it might not be in the network, since the base network only contains connected nodes
active_user_ids = user_df.loc[user_df.time_to_first_update <= 0, 'user_id']
len(active_user_ids) / len(user_df)
# create data structures storing all of the edges that do not yet but will exist in the model
# these will be added incrementally as computation continues
model_subset = inits_df[(inits_df.created_at >= model_start_timestamp)&(inits_df.created_at <= model_end_timestamp)]
all_edges = [(created_at, tuple(row))
for created_at, row
in zip(model_subset.created_at, model_subset[["from_user_id", "to_user_id"]].values)]
edge_df = pd.DataFrame(all_edges, columns=['created_at', 'edge'])
edge_df['time_to_existence'] = edge_df.created_at - model_start_timestamp
# if time_to_existence <= 0, it should exist in the network
assert np.all(edge_df.time_to_existence > 0)
len(edge_df)
prev_timestep = model_start_timestamp
active_user_ids = user_df.loc[user_df.time_to_first_update <= 0, 'user_id']
sampled_initiations = []
for from_user_id, to_user_id, created_at in tqdm(zip(inits_subset.from_user_id, inits_subset.to_user_id, inits_subset.created_at), total=len(inits_subset)):
curr_timestep = created_at
elapsed_time = curr_timestep - prev_timestep
if elapsed_time > 0: # if 2+ sampled initiations occur at the same time, elapsed_time == 0
# update the active users set
user_df.time_to_first_update -= elapsed_time
active_user_ids = user_df.loc[user_df.time_to_first_update <= 0, 'user_id']
# update the graph with all initiations between previous timestep and now
edge_df.time_to_existence -= elapsed_time
new_edge_mask = edge_df.time_to_existence < 0 # edges that exist AT zero happen at the current timestep, including the edge from_user_id, to_user_id
new_edges = edge_df[new_edge_mask]
edge_df = edge_df[~new_edge_mask] # TODO Use loc for assignment?
#assert np.all(edge_df[edge_df.time_to_existence==0].created_at == created_at)
G.add_edges_from(new_edges.edge)
# also add edges to the WCC graph (wcc_graph, with add_edge/are_weakly_connected, is assumed defined in an earlier cell)
for src_user_id, dst_user_id in new_edges.edge: # renamed so the sampled from_user_id/to_user_id are not overwritten
wcc_graph.add_edge(src_user_id, dst_user_id)
# candidate users are all active users...
candidate_user_ids = set(active_user_ids)
# ... minus the true initiation target...
candidate_user_ids.discard(to_user_id)
# ... minus users already initiated to by this user
if from_user_id in G:
candidate_user_ids -= set(G[from_user_id].keys())
# we only sample s of the candidate users
negative_sampled_users = list(random.sample(candidate_user_ids, s))
# now, extract ids for the target user and all of the negative sampled users
indegree_list = []
outdegree_list = []
is_reciprocal_list = []
is_weakly_connected_list = []
is_friend_of_friend_list = []
#is_strongly_connected_list = []
for user_id in [to_user_id] + negative_sampled_users:
is_friend_of_friend = False
if user_id in G:
indegree = G.in_degree(user_id)
outdegree = G.out_degree(user_id)
is_reciprocal = from_user_id in G[user_id]
is_weakly_connected = wcc_graph.are_weakly_connected(from_user_id, user_id)
if is_weakly_connected:
is_friend_of_friend = compute_is_friend_of_friend(G, from_user_id, user_id)
#is_strongly_connected = are_strongly_connected(G, from_user_id, user_id)
else:
indegree = 0
outdegree = 0
is_reciprocal = False
is_weakly_connected = False
indegree_list.append(indegree)
outdegree_list.append(outdegree)
is_reciprocal_list.append(is_reciprocal)
is_weakly_connected_list.append(is_weakly_connected)
#is_strongly_connected_list.append(is_strongly_connected) # disabled to match the commented-out strong-connectivity feature above
is_friend_of_friend_list.append(is_friend_of_friend)
d = {
'initiator_user_id': from_user_id,
'target_user_id': to_user_id,
'negative_user_ids': negative_sampled_users,
'created_at': created_at,
'indegree_list': indegree_list,
'outdegree_list': outdegree_list,
'is_reciprocal_list': is_reciprocal_list,
'is_weakly_connected_list': is_weakly_connected_list,
'is_friend_of_friend_list': is_friend_of_friend_list
}
sampled_initiations.append(d)
prev_timestep = curr_timestep
sampled_inits_df = pd.DataFrame(sampled_initiations)
len(sampled_inits_df)
# save the sampled initiations dataframe with graph features
# so that the expensive graph feature computation can be saved
sampled_inits_df_filename = "sampled_inits_df.pickle"
sampled_inits_df_filepath = os.path.join(working_dir, sampled_inits_df_filename)
sampled_inits_df.to_pickle(sampled_inits_df_filepath)
print("Finished.")
# read the sampled initiations dataframe with graph features
sampled_inits_df_filename = "sampled_inits_df.pickle"
sampled_inits_df_filepath = os.path.join(working_dir, sampled_inits_df_filename)
sampled_inits_df = pd.read_pickle(sampled_inits_df_filepath)
len(sampled_inits_df)
sampled_inits_df.head()
# dictionaries for computing user-level features
author_type_dict = {row.user_id: row.user_author_type for row in user_df.itertuples()}
health_condition_dict = {row.user_id: row.health_condition for row in user_df.itertuples()}
is_multisite_author_dict = {row.user_id: row.is_multisite_author for row in user_df.itertuples()}
is_mixedsite_author_dict = {row.user_id: row.is_mixedsite_author for row in user_df.itertuples()}
update_count_dict = {row.user_id: row.update_count for row in user_df.itertuples()}
update_frequency_dict = {row.user_id: row.update_frequency for row in user_df.itertuples()}
# compute days_since_most_recent_update
# given a target user_id and a created_at timestamp
def get_most_recent_update(user_id, created_at):
update_times = user_updates_dict[user_id]
# update_times is a sorted list of created_at times for all updates by the given user_id
ind = bisect.bisect_right(update_times, created_at)
most_recent_update = update_times[ind-1]
return most_recent_update
def compute_days_since_most_recent_update(user_id, created_at):
most_recent_update = get_most_recent_update(user_id, created_at)
ms_since_most_recent_update = created_at - most_recent_update
days_since_most_recent_update = ms_since_most_recent_update / (1000 * 60 * 60 * 24)
return days_since_most_recent_update
def compute_days_since_first_update(user_id, created_at):
update_times = user_updates_dict[user_id]
ind = bisect.bisect_right(update_times, created_at)
most_recent_update = update_times[ind-1]
first_update = update_times[0]
ms_since_first_update = most_recent_update - first_update
days_since_first_update = ms_since_first_update / (1000 * 60 * 60 * 24)
return days_since_first_update
sampled_initiations_filename = "author_initiation_choices_train_20000.csv"
sampled_initiations_filepath = os.path.join(working_dir, sampled_initiations_filename)
with open(sampled_initiations_filepath, 'w') as outfile:
header = """
choice_id,
initiator_user_id,
candidate_user_id,
is_target,
target_outdegree,
target_indegree,
target_has_indegree,
is_reciprocal,
is_weakly_connected,
is_friend_of_friend,
is_author_type_shared,
target_author_type,
initiator_author_type,
target_health_condition,
is_health_condition_shared,
target_is_multisite_author,
target_is_mixedsite_author,
target_update_count,
target_update_frequency,
target_days_since_most_recent_update,
target_days_since_first_update,
target_site_visits
"""
header = re.sub(r'\s+', '', header).strip() + "\n"
format_str = "iiiiiiiiiiiccciiiidddi"
outfile.write(header)
for i, row in tqdm(enumerate(sampled_inits_df.itertuples()), total=len(sampled_inits_df)):
choice_id = i
initiator_user_id = row.initiator_user_id
initiator_author_type = author_type_dict[initiator_user_id]
initiator_health_condition = health_condition_dict[initiator_user_id]
for j, user_id in enumerate([row.target_user_id] + row.negative_user_ids): # j, not i: avoid shadowing the outer choice index
is_target = int(j == 0)
candidate_user_id = user_id
target_outdegree = row.outdegree_list[j]
target_indegree = row.indegree_list[j]
target_has_indegree = int(target_indegree > 0)
is_reciprocal = int(row.is_reciprocal_list[j])
is_weakly_connected = int(row.is_weakly_connected_list[j])
is_friend_of_friend = int(row.is_friend_of_friend_list[j])
# Include the user-level features for the candidates
target_author_type = author_type_dict[candidate_user_id]
is_author_type_shared = int(initiator_author_type == target_author_type)
target_health_condition = health_condition_dict[candidate_user_id]
is_health_condition_shared = int(initiator_health_condition == target_health_condition)
target_is_multisite_author = int(is_multisite_author_dict[candidate_user_id])
target_is_mixedsite_author = int(is_mixedsite_author_dict[candidate_user_id])
target_update_count = update_count_dict[candidate_user_id]
target_update_frequency = update_frequency_dict[candidate_user_id]
target_days_since_most_recent_update = compute_days_since_most_recent_update(candidate_user_id, row.created_at)
target_days_since_first_update = compute_days_since_first_update(candidate_user_id, row.created_at)
target_site_visits = user_visits_dict[candidate_user_id]
line_vars = [
choice_id,
initiator_user_id,
candidate_user_id,
is_target,
target_outdegree,
target_indegree,
target_has_indegree,
is_reciprocal,
is_weakly_connected,
is_friend_of_friend,
is_author_type_shared,
target_author_type,
initiator_author_type,
target_health_condition,
is_health_condition_shared,
target_is_multisite_author,
target_is_mixedsite_author,
target_update_count,
target_update_frequency,
target_days_since_most_recent_update,
target_days_since_first_update,
target_site_visits
]
line = ",".join([str(v) for v in line_vars]) + "\n"
#line = f"{choice_id},{initiator_user_id},{candidate_user_id},{is_target},{target_outdegree},{target_indegree},{target_has_indegree},{is_reciprocal},{is_author_type_shared},{target_author_type},{initiator_author_type}\n"
outfile.write(line)
print(f"R column types format string: {format_str}")
sampled_initiations_filepath
###Output
_____no_output_____ |
.ipynb_checkpoints/main-pipeline-checkpoint.ipynb | ###Markdown
![title](source/title.png) Table of Contents: 1. Initialization, 2. Dataset, 3. Looking for Patterns, 4. Modelling 1. Initialization 1.1 Description Start here if you have some experience with R or Python and machine learning basics. This is a perfect competition for data science students who have completed an online course in machine learning and are looking to expand their skill set before trying a featured competition. **Competition Description** Ask a home buyer to describe their dream house, and they probably won't begin with the height of the basement ceiling or the proximity to an east-west railroad. But this playground competition's dataset proves that much more influences price negotiations than the number of bedrooms or a white-picket fence. ![image](https://storage.googleapis.com/kaggle-competitions/kaggle/5407/media/housesbanner.png) With 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, this competition challenges you to predict the final price of each home. **Practice Skills** Creative feature engineering; advanced regression techniques like random forest and gradient boosting. **Acknowledgments** The Ames Housing dataset was compiled by Dean De Cock for use in data science education. It's an incredible alternative for data scientists looking for a modernized and expanded version of the often cited Boston Housing dataset. 1.2 Packages and Modules
###Code
import pandas as pd
import numpy as np
import seaborn as sns
from scipy.stats import randint
from numpy.random import uniform
from sklearn.linear_model import Ridge , \
LinearRegression, \
Lasso
from sklearn.metrics import mean_squared_log_error, \
mean_absolute_error, \
r2_score
from sklearn.model_selection import RandomizedSearchCV, \
KFold
from sklearn.pipeline import Pipeline
# from sklearn.preprocessing import MinMaxScaler
#!pip install xtlearn
from xtlearn.feature_selection import *
from xtlearn.preprocessing import *
###Output
_____no_output_____
###Markdown
1.3 Settings
###Code
sns.set(style="darkgrid")
###Output
_____no_output_____
###Markdown
2. Dataset 2.1 Import dataset I want to work with Pipelines. However, it will not be possible to use pipelines for every step of my approach. When this occurs, I'll redefine the dataframes. To make the work easier, I will define a function to reset the initial dataframes every time I need it.
###Code
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
def reset_datasets(dic_subs = {'1stFlrSF':'FirstFlrSF','2ndFlrSF':'SecFlrSF'}):
# defining global variables
global df_trn,df_tst,X_trn,X_tst,y_trn,y_tst,train_size,test_size,full_size, df_full,X,y
# deleting old datasets
try:
del df_trn,df_tst,X_trn,X_tst,y_trn,y_tst
except:
pass
# get the training and test datasets
df_trn = train.copy()
X_tst = test.drop(columns=['Id']).copy()
# splitting features and target
X_trn = df_trn.drop(columns=['Id','SalePrice'])
y_trn = df_trn['SalePrice']
# Renaming columns with naming starting by numbers
X_trn = X_trn.rename(columns = dic_subs)
X_tst = X_tst.rename(columns = dic_subs)
# evaluating dataset lengths
train_size = len(train)
test_size = len(test)
full_size = train_size + test_size
# concatenating test and training datasets
df_full = pd.concat([train,test]).set_index('Id').rename(columns = dic_subs)
# splitting features and target of the full dataset
X = df_full.drop(columns = ['SalePrice'])
y = df_full['SalePrice']
X = X.rename(columns = dic_subs)
reset_datasets()
###Output
_____no_output_____
###Markdown
2.2 Useful Classes and Functions
###Code
class SalePriceTransformer(BaseEstimator,TransformerMixin):
'''
Description
----------
This class will transform the target data.
Arguments
----------
target_name: string, default='SalePrice'
The name of the target column
active: boolean
This parameter controls if the selection will occour. This is useful in hyperparameters searchs to test the contribution
of selection in the final score
'''
def __init__(self,active=True,target_name = 'SalePrice'):
self.target_name = target_name
self.active = active
def fit(self,y):
self.log_ymin = np.log10(y.min())
self.log_ymax = np.log10(y.max())
return self
def transform(self,y):
if not self.active:
return y
else:
return self.__transformation(y)
def __transformation(self,y_in):
y = y_in.copy()
log_y = np.log10(y)
return log_y
def inverse_transform(self,y):
if not self.active:
return y
else:
return self.__inv_transformation(y)
def __inv_transformation(self,log_y_in):
log_y = log_y_in.copy()
y = 10**(log_y)
return y.astype(int)
###Output
_____no_output_____
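###Markdown
A quick round-trip check of the transformer above (a minimal sketch; it assumes `y_trn` from `reset_datasets()` holds the raw sale prices): the log10 transform should invert cleanly through `inverse_transform`.
###Code
spt = SalePriceTransformer().fit(y_trn)
log_prices = spt.transform(y_trn) # log10 of the prices
recovered = spt.inverse_transform(log_prices) # back to integer prices
log_prices.head(3), recovered.head(3)
###Output
_____no_output_____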
###Markdown
3. Looking for Patterns
###Code
import matplotlib.pyplot as plt
reset_datasets()
raw_proc = Pipeline(steps = [
('DropMissing',DropMissingDataColumns()),
('Encoding',Encoder(drop_first=True)),
])
raw_proc.fit(X_trn,y_trn)
y_transf = SalePriceTransformer().fit(y_trn)
X = raw_proc.transform(X)
X_trn_pp = X.iloc[:train_size]
X_tst_pp = X.iloc[train_size:full_size]
y_trn = y_transf.transform(y_trn)
df_corr = pd.concat(
[X_trn_pp.reset_index(drop=True),y_trn],
axis = 1).corr().abs().sort_values(by= 'SalePrice',ascending=False)
imp_features = df_corr[df_corr['SalePrice'] > 0.3]['SalePrice'].index.to_list()
# imp_features.remove('SalePrice')
df_plot = pd.concat(
[X_trn_pp.reset_index(drop=True),y_trn],
axis = 1)[imp_features]
df_corr[['SalePrice']].head(10)
###Output
_____no_output_____
###Markdown
OverallQual
###Code
plt.scatter(df_plot['OverallQual'], df_plot['SalePrice'], c="g", s=14, label="Luck")
###Output
_____no_output_____
###Markdown
GrLivArea
###Code
# fa = FeatureApply( destination = 'GrLivArea', apply = '(<GrLivArea>)')
fa = FeatureApply( destination = 'GrLivArea', apply = '(np.log(<GrLivArea>))')
# df_plot = fa.transform(df_plot)
plt.scatter(fa.transform(df_plot)['GrLivArea'], df_plot['SalePrice'], c="g", s=14, label="Luck")
fa.transform(df_plot)[['GrLivArea','SalePrice']].corr()['SalePrice']['GrLivArea']
###Output
_____no_output_____
###Markdown
YearBuilt
###Code
# fa = FeatureApply( destination = 'YearBuilt', apply = '<YearBuilt>')
fa = FeatureApply( destination = 'YearBuilt', apply = '10**14*np.log1p(<YearBuilt>/1980)**90')
plt.scatter(fa.transform(df_plot)['YearBuilt'], df_plot['SalePrice'], c="g", s=14, label="Luck")
fa.transform(df_plot)[['YearBuilt','SalePrice']].corr()['SalePrice']['YearBuilt']
###Output
_____no_output_____
###Markdown
4. Modelling 4.1 Preprocessing My first, innocent approach is: * drop columns with more than 6% missing data; * in categorical features, replace NaN values with the mode; * in numerical features, replace NaN values with the mean; * transform 'GrLivArea' and 'YearBuilt' so that their correlation with 'SalePrice' increases; * encode categorical features, dropping one column to avoid the dummy variable trap; * scale the features between 0 and 1.
###Code
preproc = Pipeline(steps = [
('DropMissing',DropMissingDataColumns(max_missing = 0.06)),
('Imputer', MeanModeImputer()),
('apGrLivArea',FeatureApply( destination = 'GrLivArea', apply = '(np.log(<GrLivArea>)/7.1)')),
('apYearBuilt',FeatureApply( destination = 'YearBuilt', apply = '10**14*np.log1p(<YearBuilt>/1980)**90')),
('Encoding',Encoder()),
('Scaler' , ScalerDF()), ])
reset_datasets()
target_proc = SalePriceTransformer().fit(y_trn)
y_trn = target_proc.transform(y_trn)
# y_tst_true = target_proc.transform(df_tst_true['SalePrice']) # disabled: df_tst_true (test-set ground truth) is never defined in this notebook and is not available for this competition
preproc.fit(X,y)
X = preproc.transform(X)
X_trn = X.iloc[:train_size]
X_tst = X.iloc[train_size:full_size]
###Output
_____no_output_____
###Markdown
4.2 Regression Approach
###Code
def Regression_Search(X,y,
Regressor,
param_distributions,
n_iter = 50, scoring = 'neg_mean_squared_log_error',
n_splits = 10, seed = 42,
):
X_trn_pp = X
y_trn = y
search_cv = RandomizedSearchCV(
Regressor,
param_distributions,
n_iter = n_iter,
scoring = scoring,
cv = KFold(n_splits = n_splits, shuffle = True,random_state = seed))
search_cv.fit(X_trn_pp, y_trn)
scv_cols = ['params','mean_test_score','std_test_score']
results = pd.DataFrame(search_cv.cv_results_).sort_values('rank_test_score')[scv_cols]
estimator = search_cv.best_estimator_
estimator.fit(X_trn_pp,y_trn)
y_pred = target_proc.inverse_transform(estimator.predict(X_trn_pp))
print('r2_score_trn = %.4f' % r2_score(target_proc.inverse_transform(y_trn),y_pred))
print('RMSLE_trn = %.4f' % mean_squared_log_error(target_proc.inverse_transform(y_trn),y_pred)**0.5)
return estimator,pd.DataFrame(search_cv.cv_results_).sort_values('rank_test_score')
###Output
_____no_output_____
###Markdown
4.2.1 Stochastic Gradient Descent - SGDRegressor
###Code
from sklearn.linear_model import SGDRegressor
est,res = Regression_Search(
X_trn,y_trn,
Regressor = SGDRegressor(shuffle = False,loss = 'huber'),
param_distributions = {
'alpha' : 10**uniform(np.log10(0.00005),np.log10(0.0015),200),
'epsilon' : 10**uniform(np.log10(0.05),np.log10(0.15),200),
'tol' : 10**uniform(-195,-90,200),
'l1_ratio': uniform(0,1,200),
'learning_rate': ['optimal','adaptive'],},
n_iter = 100,
n_splits = 2,
scoring = 'neg_mean_squared_log_error')
res.head(5)
my_submission = pd.DataFrame({'Id': test.Id, 'SalePrice': target_proc.inverse_transform(est.predict(X_tst))})
my_submission.to_csv('data/submission.csv', index=False)
my_submission
###Output
_____no_output_____ |
quantum ML/Quantum Distance-based Classifier/Quantum Distance-based Classifier.ipynb | ###Markdown
Quantum Distance-based Classifier (QDBC)This is a code implementation of distance-based classifier (similar to k-nearest neighbour algorithm) using quantum computer adapted from research paper ["Implementing a distance-based classifier with a quantum interference circuit"](https://arxiv.org/abs/1703.10793) by Maria Schuld, Mark Fingerhuth, and Francesco Petruccione. In this code I would not explain too many things to keep it clear and concise, please read the research paper if you would like to know the details (spoiler: the paper is so well written and easy to understand for anyone having some basics in quantum computation imo).I found the modern Qiskit rewrite of the original code from the author [here](https://github.com/markf94/ibmq_code_epl_119_60002/blob/master/qiskit_distance_based_classifier.py). From my understanding, that code implementation is built specifically based on the datapoints used in the paper. So I decided to create a new one (this code) that can be implemented using any datapoints from the Iris dataset.Goal of this code: Take any three datapoints from the datasets and use two of them as training input, then we would like to predict the label/class of the third one (testing input) using the quantum version of distance-based classifier. Data PreprocessingWe will only use the first 2 classes, 'setosa' (0) and 'versicolor' (1), and the first 2 features, 'sepal length' and 'sepal width'.We need to do 3 things before we go to the quantum side: 1. [Standardization](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html): make the datasets having zero mean and unit variance. 2. [Normalization](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html): normalize the features value for every datapoints. 3. Angle extraction: After 1). and 2)., every datapoints can be represented as a coordinate vector inside a unitary circle (polar coordinate) with the origin as its center. We can extract the value of the angle form by the vector and the positive x-axis, and then replace the datapoint with this value. We need to do this because we will use [Quantum Amplitude Embedding](https://pennylane.ai/qml/glossary/quantum_embedding.html) scheme to convert the datasets into quantum states.
###Code
# import relevant libraries
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
from sklearn.preprocessing import normalize, StandardScaler
# load the datasets
iris = datasets.load_iris()
# take only the first 2 classes and the first 2 features
y = iris.target
X = np.concatenate((iris.data[:, :2][np.where(y == 0)], iris.data[:, :2][np.where(y == 1)]))
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 6
fig_size[1] = 4
plt.rcParams["figure.figsize"] = fig_size
plt.scatter(X[np.where(y == 0)][:,0], X[np.where(y == 0)][:,1], color='red', label='y = 0')
plt.scatter(X[np.where(y == 1)][:,0], X[np.where(y == 1)][:,1], color='blue', label='y = 1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.title('Dataset')
plt.legend()
plt.show()
# standardization
X = StandardScaler().fit_transform(X)
plt.scatter(X[np.where(y == 0)][:,0], X[np.where(y == 0)][:,1], color='red', label='y = 0')
plt.scatter(X[np.where(y == 1)][:,0], X[np.where(y == 1)][:,1], color='blue', label='y = 1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.title('Dataset after Standardization')
plt.legend()
plt.show()
# normalization
X = normalize(X)
X_train_0 = X[12, :]
X_train_1 = X[84, :]
X_test_0 = X[90, :]
#X_test_1 = X[36, :]
#X_test_2 = X[80, :]
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 8
fig_size[1] = 8
plt.rcParams["figure.figsize"] = fig_size
plt.scatter(X[np.where(y == 0)][:,0], X[np.where(y == 0)][:,1], color='red', label='y = 0')
plt.scatter(X[np.where(y == 1)][:,0], X[np.where(y == 1)][:,1], color='blue', label='y = 1')
plt.scatter(X_train_0[0], X_train_0[1], color='black', label='train data')
plt.scatter(X_train_1[0], X_train_1[1], color='black')
plt.scatter(X_test_0[0], X_test_0[1], color='yellow', label='test data')
#plt.scatter(X_test_1[0], X_test_1[1], color='yellow')
#plt.scatter(X_test_2[0], X_test_2[1], color='yellow')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.title('Dataset after Standardization and Normalization')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
For any coordinate vector of a datapoint after standardization and normalization$$x^m = \left[\begin{array}{c}a^m \\b^m\end{array}\right]$$we can represent that datapoint in polar coordinate (r, θ)$$x^m = \left[\begin{array}{c}r^m = 1 \\\theta^m\end{array}\right]$$where $a$ is the sepal length feature's value and $b$ is the sepal width feature's value, $m$ denotes the index of the sample in the dataset. The value of $r$ here is $1$ for all datapoints because of standardization and normalization, hence we can drop the $r$ and represent all datapoints with their respective angle$$x^m \rightarrow \theta^m$$Using this $θ$ we can easily get back the cartesian coordinate version of the vector$$x^m = \left[\begin{array}{c}\cos(\theta^m) = a^m \\\sin(\theta^m) = b^m\end{array}\right]$$
###Code
# features to angles: change the coordinate vector of a datapoint from cartesian to polar coordinate
X_angle = np.arctan2(X[:,1], X[:,0])
plt.scatter(np.linspace(0, 49, 50), X_angle[np.where(y == 0)], color='red', label='y = 0')
plt.scatter(np.linspace(0, 49, 50), X_angle[np.where(y == 1)], color='blue', label='y = 1')
plt.xlabel('dataset index')
plt.ylabel('angle [radian]')
plt.title('The value of the angle for every datapoints in the dataset')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Building the Quantum CircuitIn this part, we would like to build a quantum circuit for quantum amplitude embedding and classification. We need 4 qubits for the circuit and 2 classical bits for measurements. I assigned the qubits as below:qubit 0: the index qubit, it flags the $m$-th training input. qubit 1: the ancilla qubit, it acts as a differentiator between training and testing input. qubit 2: the input qubit, it acts as a register for the training and testing input. qubit 3: the class qubit, it encodes the class information (either 0 or 1). Preparation To encode the datapoints into quantum states, we will use the $RY(\theta)$ gate to rotate the qubit around the Y-axis. The $RY(\theta)$ gate transforms a qubit with initial state $|0\rangle$ to$$|\psi\rangle=RY(\theta)|0\rangle=\cos\left(\frac{\theta}{2}\right)|0\rangle+\sin\left(\frac{\theta}{2}\right)|1\rangle .$$Writing $x^m$ in the computational basis, we get$$x^m = \left[\begin{array}{c}a^m \\b^m\end{array}\right]= \cos(\theta^m) \left[\begin{array}{c}1 \\0\end{array}\right]+ \sin(\theta^m) \left[\begin{array}{c}0 \\1\end{array}\right]= \cos(\theta^m)|0\rangle+\sin(\theta^m)|1\rangle ,$$which means we can use the $RY(\theta)$ gate and $\theta^m$ to encode a datapoint $x^m$ into the quantum state $|\psi^m\rangle$ (this is called amplitude embedding):$$|\psi^m\rangle = RY(2\theta^m)|0\rangle = \cos(\theta^m)|0\rangle+\sin(\theta^m)|1\rangle .$$We need to double the angle value before using it as the parameter for the $RY(\theta)$ gate.
###Code
# doubling the angle value
X_angle = 2*X_angle
# import relevant libraries
from qiskit import *
from qiskit import Aer, execute
from qiskit.visualization import plot_histogram
from qiskit.providers.aer import noise
###Output
_____no_output_____
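###Markdown
As a quick check of the amplitude-embedding claim above (a minimal sketch): applying $RY(2\theta^m)$ to $|0\rangle$ should reproduce the normalized feature vector of a datapoint. The backend name is my choice, not fixed by the text.
###Code
# X_angle already stores the doubled angles, so ry(X_angle[0]) implements RY(2*theta^0)
qc_check = QuantumCircuit(1)
qc_check.ry(X_angle[0], 0)
state = execute(qc_check, Aer.get_backend('statevector_simulator')).result().get_statevector()
np.real(state), X[0] # the amplitudes should match (cos theta, sin theta)
###Output
_____no_output_____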
###Markdown
To do the amplitude embedding, we need a $CCRY(θ)$ gate. A $CCRY(θ)$ gate can be decompose as below:
###Code
# decomposition function for CCRY gate: applies RY(theta) to `target` only when both qubits
# in `control` are |1>, built from Toffoli/CNOT conjugations of quarter-angle RY rotations
def CCRY(control, target, theta, num_qubit):
qc = QuantumCircuit(num_qubit)
qc.ccx(control[0], control[1], target)
qc.cx(control[0], target)
qc.ry(theta/4, target)
qc.cx(control[0], target)
qc.ry(-1*theta/4, target)
qc.ccx(control[0], control[1], target)
qc.cx(control[0], target)
qc.ry(-1*theta/4, target)
qc.cx(control[0], target)
qc.ry(theta/4, target)
return qc
###Output
_____no_output_____
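###Markdown
To build some confidence in this decomposition, the small check below (an added sketch, assuming `qiskit.quantum_info.Operator` and the standard circuit library are available) compares the decomposed circuit against an exact doubly-controlled $RY$ gate, up to global phase.
###Code
from qiskit.quantum_info import Operator
from qiskit.circuit.library import RYGate
demo_angle = 0.9  # hypothetical test angle
reference = QuantumCircuit(3)
reference.append(RYGate(demo_angle).control(2), [0, 1, 2])  # exact CC-RY
# equiv() compares the two unitaries up to a global phase
print(Operator(CCRY([0, 1], 2, demo_angle, 3)).equiv(Operator(reference)))
###Output
_____no_output_____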
###Markdown
Now we are ready to do the amplitude embedding. State Preparation CircuitThe function below will return a quantum circuit that embeds both the training input and testing input into quantum states using amplitude embedding. This embedding scheme is similar to what is done in the research paper, but a little bit more general. I won't explain it here to keep this notebook short.
###Code
# state preparation function
def state_prep(theta_0, theta_1, theta_test):
qc = QuantumCircuit(4)
qc.h(0)
qc.h(1)
qc.barrier()
# prepare the state for the testing data (x_test)
qc.cry(theta_test, 1, 2)
qc.barrier()
# prepare the state for the label (y^m)
qc.cx(0, 3)
qc.barrier()
# prepare the state for the first training data (x_0)
qc.x(0)
qc.x(1)
qc = qc + CCRY([0,1], 2, theta_0, 4)
qc.barrier()
# prepare the state for the second training data (x_1)
qc.x(0)
qc = qc + CCRY([0,1], 2, theta_1, 4)
qc.barrier()
# flip back the ancilla qubit
qc.x(1)
qc.barrier()
return qc
###Output
_____no_output_____
###Markdown
Let's see the quantum circuit for this amplitude embedding. The barriers are there to separate the quantum circuit into several parts based on the embedding steps.
###Code
theta_0 = circuit.Parameter('$θ_0$')
theta_1 = circuit.Parameter('$θ_1$')
theta_test = circuit.Parameter('$θ_{test}$')
state_prep(theta_0, theta_1, theta_test).draw('mpl')
###Output
_____no_output_____
###Markdown
Classifier CircuitAfter the state preparation, we only need to do 2 things:1. Apply the Hadamard gate to the ancilla qubit to interfere the training input amplitude and the testing input amplitude of the input qubit.2. Measure the ancilla qubit and label/class qubit.
###Code
def qdbc(theta_0, theta_1, theta_test):
qc = QuantumCircuit(4, 2)
# state preparation
qc += state_prep(theta_0, theta_1, theta_test)
qc.barrier()
# apply the Hadamard gate to the ancilla qubit
qc.h(1)
# apply measurement to the ancilla qubit and label qubit (y^m)
qc.measure([1, 3], [0, 1])
return qc
###Output
_____no_output_____
###Markdown
Let's see the whole quantum circuit (state preparation + classifier, we will call this the Quantum Distance-based Classifier circuit). State preparation circuit and classifier circuit are separated by two barriers.
###Code
theta_0 = circuit.Parameter('$θ_0$')
theta_1 = circuit.Parameter('$θ_1$')
theta_test = circuit.Parameter('$θ_{test}$')
qdbc(theta_0, theta_1, theta_test).draw('mpl', scale=5)
###Output
_____no_output_____
###Markdown
TestingAfter measuring the Quantum Distance-based Classifier circuit for several shots, we need to do classical postprocessing. Similar to the research paper, we only need to consider the measurement results where the ancilla qubit's value is 0. Then the class of the testing input is obtained from the measurement probability of the class qubit: the testing input is assigned whichever class-qubit outcome has the higher probability. We can implement this with the hard limit function$$f(P_{|1\rangle})=\left\{\begin{array}{ll}0, & \text { if } P_{|1\rangle} \leq 0.5 \\1, & \text { if } P_{|1\rangle}>0.5\end{array}\right.$$where $P_{|1\rangle}$ is the class qubit's measurement probability of getting the state $|1\rangle$.
###Code
# hard limit function
def hardlim(P_1):
if P_1 <= 0.5:
return 0
else:
return 1
###Output
_____no_output_____
###Markdown
We would like to test this algorithm in 3 scenarios:1. ideal simulator2. noisy simulator3. IBM quantum computer server, the IBMQ ExperienceAnd we will also compare them with the classical distance-based classifier (CDBC) algorithm.
###Code
#Define the noise model based on the ibmq_athens chip
provider = IBMQ.load_account()
chip_name = 'ibmq_athens'
device = provider.get_backend(chip_name)
noise_model = noise.NoiseModel.from_backend(device)
simulator = Aer.get_backend('qasm_simulator')
from qiskit.tools.monitor import job_monitor
from qiskit.providers.ibmq import least_busy
# function to execute the circuit on simulator, both ideal and noisy
# we can run the circuit without noise by giving 0 to the noise_model argument
def run_sim(mycircuit, iteration=1024, backend=simulator, noise_model=noise_model):
if noise_model == 0:
counts = execute(mycircuit, backend, shots=iteration).result().get_counts(mycircuit)
else:
counts = execute(mycircuit, backend, shots=iteration, noise_model=noise_model).result().get_counts(mycircuit)
return counts
def run_device(mycircuit, backend, iteration=1024, any_device=False, monitor=False, tags=None, name=None, opt_level=0):
if any_device:
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= mycircuit.num_qubits and
not x.configuration().simulator and x.status().operational==True))
job = execute(mycircuit, backend, shots=iteration, optimization_level=opt_level, job_name=name, job_tags=tags)
if monitor:
job_monitor(job)
counts = job.result().get_counts(mycircuit)
return counts
# a function to convert the execution result from Qiskit to the state's probability
def measurement_probs(result, shots):
probs = [0,0,0,0] # reset the probability distribution
for x, y in result.items():
probs[2*int(x[0]) + 1*int(x[1])] = y/shots
# note that in Qiskit, the position of the qubits are flipped
# hence we take the '10' result for '01' probability and vice-versa
P_00, P_10, P_01, P_11 = probs
return P_00, P_01
def decimalToBinary(n, length):
    # convert a non-negative integer to a zero-padded binary string of the given length
    binary = bin(n).replace("0b", "")
    return binary.zfill(length)
def binaryToDecimal(binary):
    # convert an integer whose digits are binary (e.g. 101) back to its decimal value
    decimal, i = 0, 0
    while binary != 0:
        decimal += (binary % 10) * pow(2, i)
        binary = binary // 10
        i += 1
    return decimal
###Output
_____no_output_____
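###Markdown
To make the bit-ordering convention concrete, here is a toy example with a hypothetical counts dictionary (made-up numbers, not a real run). In Qiskit's little-endian strings the left bit is the class qubit and the right bit is the ancilla, so the key `'10'` contributes to $P_{01}$.
###Code
# Hypothetical counts standing in for an execution result over 1000 shots.
demo_counts = {'00': 400, '01': 100, '10': 350, '11': 150}
demo_P00, demo_P01 = measurement_probs(demo_counts, 1000)
# '00' -> P_00 = 0.40; '10' (class qubit 1, ancilla 0) -> P_01 = 0.35
print(demo_P00, demo_P01)
###Output
_____no_output_____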
###Markdown
Quantum Distance-based Classifier: Ideal Simulator
###Code
train_id = [12, 84]
theta_0 = X_angle[train_id[0]]
theta_1 = X_angle[train_id[1]]
shots = 1000
y_pred = []
acc = 0
for i in range (len(X_angle)):
if i != train_id[0] and i != train_id[1]:
theta_test = X_angle[i]
result = run_sim(qdbc(theta_0, theta_1, theta_test), shots, noise_model=0)
P_00, P_01 = measurement_probs(result, shots)
y_pred.append(hardlim(P_01/(P_00 + P_01)))
if y_pred[-1] == y[i]:
acc += 1
acc = acc/(len(X_angle)-2)
print("Test accuracy of Quantum Distance-based Classifier (Ideal Simulator):", acc)
###Output
Test accuracy of Quantum Distance-based Classifier (Ideal Simulator): 0.9795918367346939
###Markdown
Quantum Distance-based Classifier: Noisy Simulator
###Code
train_id = [12, 84]
theta_0 = X_angle[train_id[0]]
theta_1 = X_angle[train_id[1]]
shots = 1000
y_pred = []
acc = 0
for i in range (len(X_angle)):
if i != train_id[0] and i != train_id[1]:
theta_test = X_angle[i]
result = run_sim(qdbc(theta_0, theta_1, theta_test), shots)
P_00, P_01 = measurement_probs(result, shots)
y_pred.append(hardlim(P_01/(P_00 + P_01)))
if y_pred[-1] == y[i]:
acc += 1
acc = acc/(len(X_angle)-2)
print("Test accuracy of Quantum Distance-based Classifier (Noisy Simulator):", acc)
###Output
Test accuracy of Quantum Distance-based Classifier (Noisy Simulator): 0.9795918367346939
###Markdown
Quantum Distance-based Classifier: Real Hardware
###Code
provider_real = IBMQ.get_provider(hub='ibm-q')
device_real = provider.get_backend('ibmq_santiago')
test_index = [28, 90]
X_angle_test = np.array([X_angle[test_index[0]], X_angle[test_index[1]]])
y_test = np.array([y[test_index[0]], y[test_index[1]]])
print(X_angle_test)
print(y_test)
train_id = [12, 84]
theta_0 = X_angle[train_id[0]]
theta_1 = X_angle[train_id[1]]
shots = 1000
y_pred = []
acc = 0
for i in range (len(X_angle_test)):
if i != train_id[0] and i != train_id[1]:
theta_test = X_angle_test[i]
result = run_device(qdbc(theta_0, theta_1, theta_test), backend=device_real, iteration=1000, tags=['QDbC'], name='test 4', opt_level=3, monitor=True)
P_00, P_01 = measurement_probs(result, shots)
y_pred.append(hardlim(P_01/(P_00 + P_01)))
if y_pred[-1] == y_test[i]:
acc += 1
        if (i+1)%10 == 0:
            print(str(i+1) + " tests have been done. Estimated acc = " + str(acc/(i+1)))
acc = acc/(len(X_angle_test))
print("Test accuracy of Quantum Distance-based Classifier (Real Hardware):", acc)
y_pred
y_test == y_pred
###Output
_____no_output_____
###Markdown
Classical Distance-based Classifier
###Code
def distance(x1, x2):
    # squared Euclidean distance between two 2-D points
    v = x1 - x2
    return (v[0]**2 + v[1]**2)
def scaler(y):
    # map a {0, 1} label to {-1, +1}
    return (2*y-1)
def cdbc(x_0, x_1, x_test, y_0, y_1):
a = scaler(y_0)*(1 - distance(x_test, x_0)/(4*2)) + scaler(y_1)*(1 - distance(x_test, x_1)/(4*2))
if a <= 0:
return 0, a
else:
return 1, a
train_id = [12, 84]
y_pred = []
acc = 0
for i in range (len(X_angle)):
if i != train_id[0] and i != train_id[1]:
x_test = X[i, :]
result, _ = cdbc(X[train_id[0], :], X[train_id[1], :], x_test, y[train_id[0]], y[train_id[1]])
y_pred.append(result)
if y_pred[-1] == y[i]:
acc += 1
acc = acc/(len(X_angle)-2)
print("Test accuracy of Classical Distance-based Classifier:", acc)
###Output
Test accuracy of Classical Distance-based Classifier: 0.9795918367346939
###Markdown
Quantum Distance-based Classifier (QDBC)This is a code implementation of a distance-based classifier (similar to the k-nearest neighbour algorithm) using a quantum computer, adapted from the research paper ["Implementing a distance-based classifier with a quantum interference circuit"](https://arxiv.org/abs/1703.10793) by Maria Schuld, Mark Fingerhuth, and Francesco Petruccione. In this code I would not explain too many things to keep it clear and concise, please read the research paper if you would like to know the details (spoiler: the paper is so well written and easy to understand for anyone having some basics in quantum computation imo).I found the modern Qiskit rewrite of the original code from the author [here](https://github.com/markf94/ibmq_code_epl_119_60002/blob/master/qiskit_distance_based_classifier.py). From my understanding, that code implementation is built specifically based on the datapoints used in the paper. So I decided to create a new one (this code) that can be implemented using any datapoints from the Iris dataset.Goal of this code: Take any three datapoints from the dataset and use two of them as training input, then we would like to predict the label/class of the third one (testing input) using the quantum version of the distance-based classifier. Data PreprocessingWe will only use the first 2 classes, 'setosa' (0) and 'versicolor' (1), and the first 2 features, 'sepal length' and 'sepal width'.We need to do 3 things before we go to the quantum side: 1. [Standardization](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html): make the dataset have zero mean and unit variance. 2. [Normalization](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html): normalize the feature values for every datapoint. 3. Angle extraction: After 1). and 2)., every datapoint can be represented as a coordinate vector inside a unit circle (polar coordinate) with the origin as its center. We can extract the value of the angle formed by the vector and the positive x-axis, and then replace the datapoint with this value. We need to do this because we will use the [Quantum Amplitude Embedding](https://pennylane.ai/qml/glossary/quantum_embedding.html) scheme to convert the datasets into quantum states.
###Code
# import relevant libraries
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
from sklearn.preprocessing import normalize, StandardScaler
# load the datasets
iris = datasets.load_iris()
# take only the first 2 classes and the first 2 features
y = iris.target
X = np.concatenate((iris.data[:, :2][np.where(y == 0)], iris.data[:, :2][np.where(y == 1)]))
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 6
fig_size[1] = 4
plt.rcParams["figure.figsize"] = fig_size
plt.scatter(X[np.where(y == 0)][:,0], X[np.where(y == 0)][:,1], color='red', label='y = 0')
plt.scatter(X[np.where(y == 1)][:,0], X[np.where(y == 1)][:,1], color='blue', label='y = 1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.title('Dataset')
plt.legend()
plt.show()
# standardization
X = StandardScaler().fit_transform(X)
plt.scatter(X[np.where(y == 0)][:,0], X[np.where(y == 0)][:,1], color='red', label='y = 0')
plt.scatter(X[np.where(y == 1)][:,0], X[np.where(y == 1)][:,1], color='blue', label='y = 1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.title('Dataset after Standardization')
plt.legend()
plt.show()
# normalization
X = normalize(X)
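# Quick check (an added sketch): after normalization every row of X should be
# a unit vector.
print(np.allclose(np.linalg.norm(X, axis=1), 1.0))  # expected: True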
#X_train_0 = X[12, :]
#X_train_1 = X[84, :]
#X_test_0 = X[28, :]
#X_test_1 = X[36, :]
#X_test_2 = X[80, :]
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 8
fig_size[1] = 8
plt.rcParams["figure.figsize"] = fig_size
plt.scatter(X[np.where(y == 0)][:,0], X[np.where(y == 0)][:,1], color='red', label='y = 0')
plt.scatter(X[np.where(y == 1)][:,0], X[np.where(y == 1)][:,1], color='blue', label='y = 1')
#plt.scatter(X_train_0[0], X_train_0[1], color='black', label='train data')
#plt.scatter(X_train_1[0], X_train_1[1], color='black')
#plt.scatter(X_test_0[0], X_test_0[1], color='yellow', label='test data')
#plt.scatter(X_test_1[0], X_test_1[1], color='yellow')
#plt.scatter(X_test_2[0], X_test_2[1], color='yellow')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.title('Dataset after Standardization and Normalization')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
For any coordinate vector of a datapoint after standardization and normalization$$x^m = \left[\begin{array}{c}a^m \\b^m\end{array}\right]$$we can represent that datapoint in polar coordinate (r, θ)$$x^m = \left[\begin{array}{c}r^m = 1 \\\theta^m\end{array}\right]$$where $a$ is the sepal length feature's value and $b$ is the sepal width feature's value, $m$ denotes the index of the sample in the dataset. The value of $r$ here is $1$ for all datapoints because of standardization and normalization, hence we can drop the $r$ and represent all datapoints with their respective angle$$x^m \rightarrow \theta^m$$Using this $θ$ we can easily get back the cartesian coordinate version of the vector$$x^m = \left[\begin{array}{c}\cos(\theta^m) = a^m \\\sin(\theta^m) = b^m\end{array}\right]$$
###Code
# features to angles: change the coordinate vector of a datapoint from cartesian to polar coordinate
X_angle = np.arctan2(X[:,1], X[:,0])
plt.scatter(np.linspace(0, 49, 50), X_angle[np.where(y == 0)], color='red', label='y = 0')
plt.scatter(np.linspace(0, 49, 50), X_angle[np.where(y == 1)], color='blue', label='y = 1')
plt.xlabel('dataset index')
plt.ylabel('angle [radian]')
plt.title('The value of the angle for every datapoint in the dataset')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Building the Quantum CircuitIn this part, we would like to build a quantum circuit for quantum amplitude embedding and classification. We need 4 qubits for the circuit and 2 classical bits for measurements. I assigned the qubits as below:qubit 0: the index qubit, it is for flagging the $\textit{m}th$ training input. qubit 1: the ancilla qubit, it acts as a differentiator between training and testing input. qubit 2: the input qubit, it acts as a register for the training and testing input. qubit 3: the class qubit, it encodes the class information (either 0 or 1). Preparation To encode the datapoints into quantum states, we will use the $RY(θ)$ gate to rotate the qubit around the Y-axis. The $RY(θ)$ gate will transform a qubit with initial state $|0\rangle$ to$$|\psi\rangle=RY(\theta)|0\rangle=\cos\left(\frac{\theta}{2}\right)|0\rangle+\sin\left(\frac{\theta}{2}\right)|1\rangle .$$Writing $x^m$ in the computational basis, we get$$x^m = \left[\begin{array}{c}\cos(\theta^m) = a^m \\\sin(\theta^m) = b^m\end{array}\right]= \cos(\theta^m) \left[\begin{array}{c}1 \\0\end{array}\right]+ \sin(\theta^m) \left[\begin{array}{c}0 \\1\end{array}\right]= \cos(\theta^m)|0\rangle+\sin(\theta^m)|1\rangle$$this means we can use the $RY(θ)$ gate and $θ^m$ to encode a datapoint $x^m$ into the quantum state $|\psi^m\rangle$ (this is called the amplitude embedding)$$|\psi^m\rangle = RY(2\theta^m)|0\rangle = \cos(\theta^m)|0\rangle+\sin(\theta^m)|1\rangle .$$We need to double the angle value before using it as a parameter for the $RY(θ)$ gate.
###Code
# doubling the angle value
X_angle = 2*X_angle
# import relevant libraries
from qiskit import *
from qiskit import Aer, execute
from qiskit.visualization import plot_histogram
from qiskit.providers.aer import noise
provider = IBMQ.load_account()
###Output
c:\users\user\appdata\local\programs\python\python37\lib\site-packages\qiskit\providers\ibmq\ibmqfactory.py:192: UserWarning: Timestamps in IBMQ backend properties, jobs, and job results are all now in local time instead of UTC.
warnings.warn('Timestamps in IBMQ backend properties, jobs, and job results '
###Markdown
To do the amplitude embedding, we need a $CCRY(θ)$ gate. A $CCRY(θ)$ gate can be decomposed as below:
###Code
# decomposition function for CCRY gate
def CCRY(control, target, theta, num_qubit):
qc = QuantumCircuit(num_qubit)
qc.ccx(control[0], control[1], target)
qc.cx(control[0], target)
qc.ry(theta/4, target)
qc.cx(control[0], target)
qc.ry(-1*theta/4, target)
qc.ccx(control[0], control[1], target)
qc.cx(control[0], target)
qc.ry(-1*theta/4, target)
qc.cx(control[0], target)
qc.ry(theta/4, target)
return qc
###Output
_____no_output_____
###Markdown
Now we are ready to do the amplitude embedding. State Preparation CircuitThe function below will return a quantum circuit that embeds both the training input and testing input into quantum states using amplitude embedding. This embedding scheme is similar to what is done in the research paper, but a little bit more general. I won't explain it here to keep this notebook short.
###Code
# state preparation function
def state_prep(theta_0, theta_1, theta_test):
qc = QuantumCircuit(4)
qc.h(0)
qc.h(1)
qc.barrier()
# prepare the state for the testing data (x_test)
qc.cry(theta_test, 1, 2)
qc.barrier()
# prepare the state for the label (y^m)
qc.cx(0, 3)
qc.barrier()
# prepare the state for the first training data (x_0)
qc.x(0)
qc.x(1)
qc = qc + CCRY([0,1], 2, theta_0, 4)
qc.barrier()
# prepare the state for the second training data (x_1)
qc.x(0)
qc = qc + CCRY([0,1], 2, theta_1, 4)
qc.barrier()
# flip back the ancilla qubit
qc.x(1)
qc.barrier()
return qc
###Output
_____no_output_____
###Markdown
Let's see the quantum circuit for this amplitude embedding. The barriers are there to separate the quantum circuit into several parts based on the embedding steps.
###Code
theta_0 = circuit.Parameter('$θ_0$')
theta_1 = circuit.Parameter('$θ_1$')
theta_test = circuit.Parameter('$θ_{test}$')
state_prep(theta_0, theta_1, theta_test).draw('mpl')
###Output
_____no_output_____
###Markdown
Classifier CircuitAfter the state preparation, we only need to do 2 things:1. Apply the Hadamard gate to the ancilla qubit to interfere the training input amplitude and the testing input amplitude of the input qubit.2. Measure the ancilla qubit and label/class qubit.
###Code
def qdbc(theta_0, theta_1, theta_test):
qc = QuantumCircuit(4, 2)
# state preparation
qc += state_prep(theta_0, theta_1, theta_test)
qc.barrier()
# apply the Hadamard gate to the ancilla qubit
qc.h(1)
# apply measurement to the ancilla qubit and label qubit (y^m)
qc.measure([1, 3], [0, 1])
return qc
###Output
_____no_output_____
###Markdown
Let's see the whole quantum circuit (state preparation + classifier, we will call this the Quantum Distance-based Classifier circuit). State preparation circuit and classifier circuit are separated by two barriers.
###Code
theta_0 = circuit.Parameter('$θ_0$')
theta_1 = circuit.Parameter('$θ_1$')
theta_test = circuit.Parameter('$θ_{test}$')
qdbc(theta_0, theta_1, theta_test).draw('mpl', scale=5)
###Output
_____no_output_____
###Markdown
TestingAfter measuring the Quantum Distance-based Classifier circuit for several shots, we need to do classical postprocessing. Similar to the research paper, we only need to consider the measurement results where the ancilla qubit's value is 0. Then the class of the testing input is obtained from the measurement probability of the class qubit: the testing input is assigned whichever class-qubit outcome has the higher probability. We can implement this with the hard limit function$$f(P_{|1\rangle})=\left\{\begin{array}{ll}0, & \text { if } P_{|1\rangle} \leq 0.5 \\1, & \text { if } P_{|1\rangle}>0.5\end{array}\right.$$where $P_{|1\rangle}$ is the class qubit's measurement probability of getting the state $|1\rangle$.
###Code
# hard limit function
def hardlim(P_1):
if P_1 <= 0.5:
return 0
else:
return 1
###Output
_____no_output_____
###Markdown
We would like to test this algorithm in 3 scenarios:1. ideal simulator2. noisy simulator3. IBM quantum computer server, the IBMQ Experience (for future update)And we will also compare them with the classical distance-based classifier (CDBC) algorithm.
###Code
#Define the noise model based on the ibmq_essex chip
chip_name = 'ibmq_essex'
device = provider.get_backend(chip_name)
noise_model = noise.NoiseModel.from_backend(device)
simulator = Aer.get_backend('qasm_simulator')
# we can also run the circuit without noise by giving 0 to the noise_model argument
def run(mycircuit, iteration, simulator=simulator, noise_model=noise_model):
if noise_model == 0:
counts = execute(mycircuit, simulator, shots=iteration).result().get_counts(mycircuit)
else:
counts = execute(mycircuit, simulator, shots=iteration, noise_model=noise_model).result().get_counts(mycircuit)
return counts
# a function to convert the execution result from Qiskit to the state's probability
def measurement_probs(result, shots):
probs = [0,0,0,0] # reset the probability distribution
for x, y in result.items():
probs[2*int(x[0]) + 1*int(x[1])] = y/shots
# note that in Qiskit, the position of the qubits are flipped
# hence we take the '10' result for '01' probability and vice-versa
P_00, P_10, P_01, P_11 = probs
return P_00, P_01
###Output
_____no_output_____
###Markdown
Quantum Distance-based Classifier: Ideal Simulator
###Code
train_id = [12, 84]
theta_0 = X_angle[train_id[0]]
theta_1 = X_angle[train_id[1]]
shots = 1000
y_pred = []
acc = 0
for i in range (len(X_angle)):
if i != train_id[0] and i != train_id[1]:
theta_test = X_angle[i]
result = run(qdbc(theta_0, theta_1, theta_test), shots, noise_model=0)
P_00, P_01 = measurement_probs(result, shots)
y_pred.append(hardlim(P_01/(P_00 + P_01)))
if y_pred[-1] == y[i]:
acc += 1
acc = acc/(len(X_angle)-2)
print("Test accuracy of Quantum Distance-based Classifier (Ideal Simulator):", acc)
###Output
Test accuracy of Quantum Distance-based Classifier (Ideal Simulator): 0.9795918367346939
###Markdown
Quantum Distance-based Classifier: Noisy Simulator
###Code
train_id = [12, 84]
theta_0 = X_angle[train_id[0]]
theta_1 = X_angle[train_id[1]]
shots = 1000
y_pred = []
acc = 0
for i in range (len(X_angle)):
if i != train_id[0] and i != train_id[1]:
theta_test = X_angle[i]
result = run(qdbc(theta_0, theta_1, theta_test), shots, noise_model=noise_model)
P_00, P_01 = measurement_probs(result, shots)
y_pred.append(hardlim(P_01/(P_00 + P_01)))
if y_pred[-1] == y[i]:
acc += 1
acc = acc/(len(X_angle)-2)
print("Test accuracy of Quantum Distance-based Classifier (Ideal Simulator):", acc)
###Output
Test accuracy of Quantum Distance-based Classifier (Noisy Simulator): 0.9693877551020408
###Markdown
Classical Distance-based Classifier
###Code
def distance(x1, x2):
    # squared Euclidean distance between two 2-D points
    v = x1 - x2
    return (v[0]**2 + v[1]**2)
def scaler(y):
    # map a {0, 1} label to {-1, +1}
    return (2*y-1)
def cdbc(x_0, x_1, x_test, y_0, y_1):
a = scaler(y_0)*(1 - distance(x_test, x_0)/(4*2)) + scaler(y_1)*(1 - distance(x_test, x_1)/(4*2))
if a <= 0:
return 0, a
else:
return 1, a
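# A hypothetical worked example (made-up unit vectors, not dataset points): the
# test point is closer to the class-0 training point, so the score is negative
# and the predicted label is 0.
demo_label, demo_score = cdbc(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                              np.array([0.8, 0.6]), 0, 1)
print(demo_label, demo_score)  # expected: 0 -0.05 (up to float error)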
train_id = [12, 84]
y_pred = []
acc = 0
for i in range (len(X_angle)):
if i != train_id[0] and i != train_id[1]:
x_test = X[i, :]
result, _ = cdbc(X[train_id[0], :], X[train_id[1], :], x_test, y[train_id[0]], y[train_id[1]])
y_pred.append(result)
if y_pred[-1] == y[i]:
acc += 1
acc = acc/(len(X_angle)-2)
print("Test accuracy of Classical Distance-based Classifier:", acc)
###Output
Test accuracy of Classical Distance-based Classifier: 0.9795918367346939
|
how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.31.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
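# Sanity check (an added sketch): every series should contribute exactly
# n_test_periods rows to the test split.
print(test.groupby(time_series_id_column_names).size().eq(n_test_periods).all())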
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used in the scenario that our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test.copy()  # copy so the original test frame keeps its target column
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more details. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
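###Markdown
As a cross-check, we can hand-roll the MAPE from the same dataframe (an added sketch; it assumes the actual quantities in `df_all` are nonzero, which holds for these sales counts).
###Code
def mape(actual, pred):
    # mean absolute percentage error; assumes no zero actuals
    return np.mean(np.abs((actual - pred) / actual)) * 100
print('MAPE: {:.3f}%'.format(mape(df_all[target_column_name].values,
                                  df_all['predicted'].values)))
###Output
_____no_output_____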
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({"data": json.loads(X_query.to_json(orient="records"))})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the service returned an error payload, show the raw response instead.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
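# Added sanity check: for every grain, the training period should end before
# the test period begins.
train_end = X_train.groupby(grain_column_names)[time_column_name].max()
test_start = X_test.groupby(grain_column_names)[time_column_name].min()
print((train_end < test_start).all())  # expected: True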
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_voting_ensemble=False,
enable_stack_ensemble=False,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
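###Markdown
AutoML constructs the rolling-origin folds internally, so no user code is needed. Purely for intuition, here is a minimal sketch of what expanding-window, rolling-origin splits look like, using scikit-learn's `TimeSeriesSplit` on a toy series (an analogy, not the AutoML implementation):
###Code
# Illustration only: rolling-origin CV folds on a toy series.
# Each fold trains on an expanding window of past observations and
# validates on the periods immediately after that fold's origin.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
toy_series = np.arange(12)  # stand-in for 12 weekly observations
tscv = TimeSeriesSplit(n_splits=3)
for fold, (train_idx, valid_idx) in enumerate(tscv.split(toy_series)):
    print('fold {}: train={}, validate={}'.format(fold, train_idx, valid_idx))
###Output
_____no_output_____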
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` serves as a placeholder to be replaced by a forecast value. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_. (A hypothetical sketch of advancing the forecast origin appears after the next code cell.)
###Code
# Replace ALL values in y_query with NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
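###Markdown
As a hypothetical variant (not executed here), supplying some definite values in the query advances each grain's forecast origin: periods with a known `y` are treated as context and only the `NaN` positions are forecast. A minimal sketch, assuming we reveal the first half of the test targets; note that a real mask should be built per grain, since `y_test` is a flat array over all grains:
###Code
# Sketch only: advance the forecast origin by revealing part of y_test.
# NOTE: a proper mask would be computed within each grain (e.g. via a
# groupby on the grain columns of X_test); this flat slice is illustrative.
import numpy as np
y_query_partial = y_test.copy().astype(float)
y_query_partial[len(y_query_partial) // 2:] = np.nan  # later half left to forecast
# y_pred_late, X_trans_late = fitted_pipeline.forecast(X_test, y_query_partial)
###Output
_____no_output_____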
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for several metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # just the forecast and its index, not the featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it down into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # just the forecast and its index, not the featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(best_run.id.split('_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-train-automl'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the modelThe examples in the following code samples use the [University of Chicago's Dominick's Finer Foods dataset](https://research.chicagobooth.edu/kilts/marketing-databases/dominicks) to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. SetupAs part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
import azureml.core
import pandas as pd
import numpy as np
import os
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
from sklearn.metrics import mean_absolute_error, mean_squared_error
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojsalesforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojsalesforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(data=output, index=['']).T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingFor the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series.
###Code
ntest_periods = 20
def split_last_n_by_grain(df, n):
"""
Group df by grain and split on last n rows for each group
"""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data, ntest_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning):
###Code
nvalidation_periods = 20
X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)
###Output
_____no_output_____
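###Markdown
The frequency detection and imputation steps described in the Modeling cell above happen inside AutoML; no user code is required. Purely for intuition, here is a minimal pandas sketch (with made-up values) of regularizing a weekly series and forward-filling the resulting gap in the target:
###Code
# Illustration only: make a weekly series regular, then forward-fill the gap.
import pandas as pd
idx = pd.to_datetime(['2020-01-06', '2020-01-13', '2020-01-27'])  # week of Jan 20 absent
s = pd.Series([10.0, 12.0, 9.0], index=idx)
regular = s.asfreq('W-MON')  # insert the absent time point as NaN
imputed = regular.ffill()    # impute the target via forward-fill
print(imputed)
###Output
_____no_output_____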
###Markdown
We also need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
y_validate = X_validate.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. |Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features, shape = [n_training_samples, n_features]||**y**|Target values, shape = [n_training_samples, ]||**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]||**y_valid**|Target values for validation, shape = [n_validation_samples, ]|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|**debug_log**|Log file path for writing debugging information|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.
###Code
automl_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity']
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
X=X_train,
y=y_train,
X_valid=X_validate,
y_valid=y_validate,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**automl_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
local_run
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
Make Predictions from the Best Fitted ModelNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. The target predictions can be retrieved by calling the `predict` method on the best model:
###Code
y_pred = fitted_pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Calculate evaluation metrics for the predictionTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for several metrics, including the mean absolute percentage error (MAPE).
###Code
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred)))
print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))
print('MAPE: %.2f' % MAPE(y_test, y_pred))
###Output
_____no_output_____
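###Markdown
As a quick sanity check of the MAPE helper on toy numbers (illustrative only): rows where the actual is NaN or close to zero are excluded, so only the first two pairs below count, giving |110-100|/100 = 10% and |180-200|/200 = 10%, i.e. a MAPE of 10.
###Code
# Toy check of MAPE: the zero and NaN actuals are filtered out.
import numpy as np
toy_actual = np.array([100.0, 200.0, 0.0, np.nan])
toy_pred = np.array([110.0, 180.0, 5.0, 50.0])
print('MAPE: %.2f' % MAPE(toy_actual, toy_pred))  # expect 10.00
###Output
_____no_output_____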
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the modelThe examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojsalesforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojsalesforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingFor the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series.
###Code
ntest_periods = 20
def split_last_n_by_grain(df, n):
"""
Group df by grain and split on last n rows for each group
"""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data, ntest_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning):
###Code
nvalidation_periods = 20
X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)
###Output
_____no_output_____
###Markdown
We also need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
y_validate = X_validate.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. |Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features, shape = [n_training_samples, n_features]||**y**|Target values, shape = [n_training_samples, ]||**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]||**y_valid**|Target values for validation, shape = [n_validation_samples, ]|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|**debug_log**|Log file path for writing debugging information|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.
###Code
automl_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity']
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
X=X_train,
y=y_train,
X_valid=X_validate,
y_valid=y_validate,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**automl_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
local_run
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
Make Predictions from the Best Fitted ModelNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. The target predictions can be retrieved by calling the `predict` method on the best model:
###Code
y_pred = fitted_pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Calculate evaluation metrics for the predictionTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for several metrics, including the mean absolute percentage error (MAPE).
###Code
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred)))
print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))
print('MAPE: %.2f' % MAPE(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.22.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If an AmlCompute with that name is already in your workspace, this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series (an illustrative sketch of this encoding follows the next code cell)* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
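###Markdown
The series-id features mentioned in the Modeling cell above are created by AutoML itself. Purely for intuition, here is a sketch of the kind of encoding involved: one-hot indicator columns derived from the time series identifiers, which let a single regression model learn per-series offsets (fixed effects):
###Code
# Illustration only: one-hot encode the series identifiers with pandas.
id_example = train[time_series_id_column_names].drop_duplicates().head()
print(pd.get_dummies(id_example, columns=time_series_id_column_names))
###Output
_____no_output_____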
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the inferred type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell the SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. The _logQuantity_ column is completely correlated with the target quantity, so we already dropped it from the data earlier in this notebook to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series (a small date-arithmetic illustration follows the configuration cell below). In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
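###Markdown
For intuition about `forecast_horizon`: with a weekly series, a horizon of 20 extends the forecast window 20 weeks past the last training date of each series. A small date-arithmetic sketch (the date below is made up):
###Code
# Illustration only: the horizon is counted in units of the series frequency.
import pandas as pd
last_train_date = pd.Timestamp('1992-10-01')  # hypothetical last training week
horizon = 20
forecast_end = last_train_date + pd.Timedelta(weeks=horizon)
print('Forecasts extend through:', forecast_end)
###Output
_____no_output_____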
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more details on the forecast interface. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for several metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
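###Markdown
As a cross-check on the scoring module above, the MAPE can also be computed directly from the joined dataframe. A minimal sketch, assuming `df_all` from the previous cells; rows with zero actual sales are skipped to avoid division by zero:
###Code
# Hedged sketch: manual mean absolute percentage error (MAPE).
actuals = df_all[target_column_name].values
preds = df_all['predicted'].values
nonzero = actuals != 0  # guard against zero-sales weeks
mape = np.mean(np.abs((actuals[nonzero] - preds[nonzero]) / actuals[nonzero])) * 100
print('MAPE: {:.2f}%'.format(mape))
###Output
_____no_output_____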
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the service returned an error payload or unparsable JSON, show the raw response.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.16.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) works on newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
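###Markdown
A quick sanity check on the split (a sketch using the frames above): every series should contribute exactly `n_test_periods` rows to the test set, and the train and test periods should not overlap.
###Code
# Hedged sketch: verify per-series test sizes and the temporal ordering of the split.
test_sizes = test.groupby(time_series_id_column_names).size()
assert (test_sizes == n_test_periods).all(), 'unexpected test-set size for some series'
last_train = train.groupby(time_series_id_column_names)[time_column_name].max()
first_test = test.groupby(time_series_id_column_names)[time_column_name].min()
assert (last_train < first_test).all(), 'train/test periods overlap'
print('Split looks consistent across {} series.'.format(len(test_sizes)))
###Output
_____no_output_____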
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
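###Markdown
For intuition on the imputation behaviors listed above, here is a toy pandas sketch (illustrative only, not AutoML's internal code) of forward-filling a target and median-filling a feature:
###Code
# Hedged illustration of the default imputations described in the Modeling section.
toy = pd.DataFrame({'Quantity': [10.0, np.nan, 12.0, np.nan],
                    'Price': [2.5, np.nan, 2.7, 2.6]})
toy['Quantity'] = toy['Quantity'].ffill()                   # target: forward fill
toy['Price'] = toy['Price'].fillna(toy['Price'].median())   # feature: median
print(toy)
###Output
_____no_output_____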
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:

1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and converted accordingly, while others may hold epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.

This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade)
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
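###Markdown
A quick way to double-check what was registered on the config (a sketch; the attribute names below are assumed to echo the customizations set above):
###Code
# Hedged sketch: inspect the customizations recorded on the FeaturizationConfig.
print('Dropped columns   :', featurization_config.drop_columns)
print('Column purposes   :', featurization_config.column_purposes)
print('Transformer params:', featurization_config.transformer_params)
###Output
_____no_output_____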
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.

|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.|

TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspaceupgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**featurization**|'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
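###Markdown
To make the rolling origin validation idea above concrete, the toy sketch below (plain pandas, not AutoML internals, which may differ in detail) cuts three expanding-window folds from a single weekly series:
###Code
# Hedged illustration of rolling origin CV fold construction on a toy 12-point series.
toy = pd.Series(range(12), index=pd.date_range('2020-01-02', periods=12, freq='W-THU'))
horizon, n_folds = 2, 3
for fold in range(n_folds):
    train_end = len(toy) - horizon * (n_folds - fold)
    train_idx = toy.index[:train_end]
    valid_idx = toy.index[train_end:train_end + horizon]
    print('Fold {}: train through {}, validate {} to {}'.format(
        fold, train_idx[-1].date(), valid_idx[0].date(), valid_idx[-1].date()))
###Output
_____no_output_____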
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For more detail on this interface, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the service returned an error payload or unparsable JSON, show the raw response.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
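###Markdown
The same service can also be called over plain HTTP, which is how a non-Python client would consume it. A sketch, assuming authentication was not enabled when the ACI service was deployed:
###Code
import requests
# Hedged sketch: POST the same JSON payload directly to the scoring endpoint.
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(resp.status_code)
print(resp.text[:500])  # first part of the raw response
###Output
_____no_output_____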
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Compute](compute)1. [Data](data)1. [Train](train)1. [Forecast](forecast)1. [Operationalize](operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.32.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) works on newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
test_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_test.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:

1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and converted accordingly, while others may hold epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.

|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information.|

TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**featurization**|'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
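###Markdown
The `freq='W-THU'` setting above is an assumption worth verifying against the data. A quick sanity check (a sketch, assuming the `train` frame from the earlier split) infers the offset alias pandas sees for one series:
###Code
# Hedged sketch: confirm one series really is weekly with weeks starting on Thursday.
one_series = (train[(train.Store == use_stores[0]) & (train.Brand == train.Brand.iloc[0])]
              .sort_values(time_column_name))
print('Inferred frequency:', pd.infer_freq(one_series[time_column_name]))  # expect 'W-THU'
###Output
_____no_output_____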
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv')
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b')
test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
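###Markdown
The residuals link above describes richer post-training analysis; a minimal local sketch of the same idea, assuming `fcst_df` from the cell before:
###Code
# Hedged sketch: residuals (actual minus predicted) and a quick histogram.
residuals = fcst_df[target_column_name] - fcst_df['predicted']
plt.hist(residuals, bins=30)
plt.xlabel('residual')
plt.ylabel('count')
plt.title('Forecast residuals on the test set')
plt.show()
###Output
_____no_output_____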
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({"data": json.loads(X_query.to_json(orient="records"))})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the service returned an error payload or unparsable JSON, show the raw response.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) works on newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data Splitting

We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
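# Optional sanity check (a sketch, not part of the original tutorial): every
# series in the test split should contain exactly n_test_periods rows.
assert (X_test.groupby(grain_column_names).size() == n_test_periods).all()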
###Output
_____no_output_____
###Markdown
Modeling

For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:

* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities

AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.

You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
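# Aside: a minimal pandas sketch of the imputation style described above
# (forward-fill for the target, median fill for numeric features). This is
# illustrative only, not AutoML's actual implementation; `_toy` is a made-up frame.
_toy = pd.DataFrame({'Quantity': [10.0, np.nan, 12.0], 'Price': [2.0, 2.5, np.nan]})
_toy['Quantity'] = _toy['Quantity'].ffill()
_toy['Price'] = _toy['Price'].fillna(_toy['Price'].median())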
###Output
_____no_output_____
###Markdown
Train

The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.

For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.

The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.

Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.

Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|
|**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]|
|**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**drop_column_names**|Name(s) of columns to drop prior to modeling|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best Model

Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
Forecasting

Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.

We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values in which each `NaN` acts as a placeholder to be filled in by the forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the train data which contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_pred by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # note: np.float was removed in newer NumPy releases
y_query.fill(np.nan)
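# Variant (hypothetical, left commented out): to move the forecast origin
# forward, definite y values can be kept for the early test periods of each
# series. A per-series mask is needed because y_query is a flat array aligned
# to the rows of X_test:
# known = (X_test.groupby(grain_column_names).cumcount() < 4).values
# y_query[known] = y_test[known]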
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.

The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features.

Evaluate

To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
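# Quick worked example with made-up numbers: actuals [100, 200] vs predictions
# [110, 190] give APEs of 10% and 5%, so the MAPE should be 7.5.
assert np.isclose(MAPE(np.array([100., 200.]), np.array([110., 190.])), 7.5)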
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize

_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
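# Sketch: the registry can be queried for the model we just registered.
# (Model.list is part of the azureml-core SDK.)
from azureml.core.model import Model
for m in Model.list(ws, name=model.name):
    print(m.name, m.version)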
###Output
_____no_output_____
###Markdown
Develop the scoring script

Serializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except:
print(res_dict)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)

Automated Machine Learning

_**Orange Juice Sales Forecasting**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Compute](Compute)
1. [Data](Data)
1. [Train](Train)
1. [Predict](Predict)
1. [Operationalize](Operationalize)

Introduction

In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.

Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.

The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.

Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
###Output
C:\Users\jp\AppData\Roaming\Python\Python36\site-packages\sklearn\ensemble\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
from numpy.core.umath_tests import inner1d
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
# ws = Workspace.from_config()
import sys
sys.path.append(r'C:\Users\jp\Documents\GitHub\vault-private')
import credentials
ws = credentials.authenticate_AZR('gmail','WS_demo','RG_wip')
# choose a name for the run history container in the workspace
experiment_name = 'test-automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
# pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
Found workspace WS_demo at location eastus2
###Markdown
Compute

You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If an AmlCompute with that name is already in your workspace, this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
# print('Checking cluster status...')
# # Can poll for a minimum number of nodes and for a specific timeout.
# # If no min_node_count is provided, it will use the scale settings for the cluster.
# compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# # For a more detailed view of current AmlCompute status, use get_status().
# check compute targets
cts = ws.compute_targets
print(cts)
# attach compute (gpu / cpu / local)
import pyautogui
sys.path.append(r'C:\Users\jp\Documents\GitHub\jp-codes-python\autoML_py36')
import jp_utils
answer = pyautogui.prompt(
text='Enter compute target (gpu, cpu, or local)',
title='Compute target',
default='cpu')
compute_dict = {'gpu':'gpu-cluster', 'cpu':'cpu-cluster', 'local':'gpu-local'}
target_name = jp_utils.generic_switch(compute_dict, answer)
compute_target =cts[target_name]
print(compute_target.name)
###Output
Found existing compute target.
{'cpu-cluster': AmlCompute(workspace=Workspace.create(name='WS_demo', subscription_id='be8e48ab-94b2-4145-a6de-2104dc657912', resource_group='RG_wip'), name=cpu-cluster, id=/subscriptions/be8e48ab-94b2-4145-a6de-2104dc657912/resourceGroups/RG_wip/providers/Microsoft.MachineLearningServices/workspaces/WS_demo/computes/cpu-cluster, type=AmlCompute, provisioning_state=Succeeded, location=eastus2, tags=None)}
cpu-cluster
###Markdown
Data

You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
Data contains 249 individual time-series.
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
Data subset contains 9 individual time-series.
###Markdown
Data Splitting

We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
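# Quick boundary check (sketch): the latest training date should precede the
# earliest test date, since the split takes the final n periods of each series.
print(train[time_column_name].max(), test[time_column_name].min())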
###Output
_____no_output_____
###Markdown
Upload data to datastore

The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
Uploading an estimated of 2 files
Uploading ./dominicks_OJ_test.csv
Uploading ./dominicks_OJ_train.csv
Uploaded ./dominicks_OJ_test.csv, 1 files out of an estimated total of 2
Uploaded ./dominicks_OJ_train.csv, 2 files out of an estimated total of 2
Uploaded 2 files
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
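# The test CSV was uploaded as well; a matching tabular dataset could be
# created the same way if needed later (left commented out, as it is not used
# further in this notebook):
# test_dataset = Dataset.Tabular.from_delimited_files(
#     path=datastore.path('dataset/dominicks_OJ_test.csv'))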
###Output
_____no_output_____
###Markdown
Modeling

For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:

* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities

In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook.

You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
Train

The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.

For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.

The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.

Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.

Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**drop_column_names**|Name(s) of columns to drop prior to modeling|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'], # 'logQuantity' is a leaky feature, so we remove it.
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model

Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
model_name = best_run.properties['model_name']
print(model_name)
print(best_run.properties)
print(fitted_model)
from helper import get_result_df
summary_df = get_result_df(remote_run)
print(summary_df)
###Output
AutoML1534bb1255
{'runTemplate': 'automl_child', 'pipeline_id': '__AutoML_Ensemble__', 'pipeline_spec': '{"pipeline_id":"__AutoML_Ensemble__","objects":[{"module":"azureml.train.automl.ensemble","class_name":"Ensemble","spec_class":"sklearn","param_args":[],"param_kwargs":{"automl_settings":"{\'task_type\':\'regression\',\'primary_metric\':\'normalized_mean_absolute_error\',\'debug_log\':\'azureml_automl.log\',\'verbosity\':20,\'ensemble_iterations\':15,\'is_timeseries\':True,\'name\':\'test-automl-ojforecasting\',\'compute_target\':\'cpu-cluster\',\'subscription_id\':\'be8e48ab-94b2-4145-a6de-2104dc657912\',\'region\':\'eastus2\',\'time_column_name\':\'WeekStarting\',\'grain_column_names\':[\'Store\',\'Brand\'],\'max_horizon\':20,\'drop_column_names\':[\'logQuantity\'],\'spark_service\':None}","ensemble_run_id":"AutoML_1534bb12-57ff-4208-827d-20effa815c1d_5","experiment_name":"test-automl-ojforecasting","workspace_name":"WS_demo","subscription_id":"be8e48ab-94b2-4145-a6de-2104dc657912","resource_group_name":"RG_wip"}}]}', 'training_percent': '100', 'predicted_cost': None, 'iteration': '5', '_azureml.ComputeTargetType': 'amlcompute', 'ContentSnapshotId': '1cf9b045-e8a8-4aaa-ae4c-7175dddb4662', 'AzureML.DerivedImageName': 'azureml/azureml_5a74d428fe5c384ab15f195d4ff87498', 'ProcessInfoFile': 'azureml-logs/process_info.json', 'ProcessStatusFile': 'azureml-logs/process_status.json', 'run_template': 'automl_child', 'run_preprocessor': '', 'run_algorithm': 'VotingEnsemble', 'conda_env_data_location': 'aml://artifact/ExperimentRun/dcid.AutoML_1534bb12-57ff-4208-827d-20effa815c1d_5/outputs/conda_env_v_1_0_0.yml', 'model_data_location': 'aml://artifact/ExperimentRun/dcid.AutoML_1534bb12-57ff-4208-827d-20effa815c1d_5/outputs/model.pkl', 'pipeline_graph_version': '1.0.0', 'model_exp_support': 'False', 'scoring_data_location': 'aml://artifact/ExperimentRun/dcid.AutoML_1534bb12-57ff-4208-827d-20effa815c1d_5/outputs/scoring_file_v_1_0_0.py', 'model_name': 'AutoML1534bb1255', 'staticProperties': '{}', 'score': '0.03302876802566193', 'run_properties': "estimators=[('2', Pipeline(memory=None,\n steps=[('standardscalerwrapper', <azureml.automl.runtime.shared.model_wrappers.StandardScalerWrapper object at 0x7f37c9cee1d0>", 'pipeline_script': '{"pipeline_id":"__AutoML_Ensemble__","objects":[{"module":"azureml.train.automl.ensemble","class_name":"Ensemble","spec_class":"sklearn","param_args":[],"param_kwargs":{"automl_settings":"{\'task_type\':\'regression\',\'primary_metric\':\'normalized_mean_absolute_error\',\'debug_log\':\'azureml_automl.log\',\'verbosity\':20,\'ensemble_iterations\':15,\'is_timeseries\':True,\'name\':\'test-automl-ojforecasting\',\'compute_target\':\'cpu-cluster\',\'subscription_id\':\'be8e48ab-94b2-4145-a6de-2104dc657912\',\'region\':\'eastus2\',\'time_column_name\':\'WeekStarting\',\'grain_column_names\':[\'Store\',\'Brand\'],\'max_horizon\':20,\'drop_column_names\':[\'logQuantity\'],\'spark_service\':None}","ensemble_run_id":"AutoML_1534bb12-57ff-4208-827d-20effa815c1d_5","experiment_name":"test-automl-ojforecasting","workspace_name":"WS_demo","subscription_id":"be8e48ab-94b2-4145-a6de-2104dc657912","resource_group_name":"RG_wip"}}]}', 'training_type': 'MeanCrossValidation', 'num_classes': '', 'framework': 'sklearn', 'fit_time': '8', 'goal': 'normalized_mean_absolute_error_min', 'class_labels': '', 'primary_metric': 'normalized_mean_absolute_error', 'errors': '{}', 'fitted_pipeline': "ForecastingPipelineWrapper(pipeline=Pipeline(memory=None,\n steps=[('timeseriestransformer', 
TimeSeriesTransformer(featurization_config='auto', logger=None,\n pipeline_type=<TimeSeriesPipelineType.FULL: 1>)), ('prefittedsoftvotingregressor', PreFittedSoftVotingRegressor(estimators=[('2', Pipeline(memory=None,\n steps=[('standardscalerwrapper', <a...333333333333, 0.06666666666666667, 0.06666666666666667, 0.06666666666666667, 0.26666666666666666]))]),\n stddev=None)", 'friendly_errors': '{}', 'onnx_model_resource': '{}', 'error_code': '', 'failure_reason': '', 'feature_skus': 'automatedml_sdk_guardrails', 'dependencies_versions': '{"azureml-train-automl": "1.2.0", "azureml-train-automl-runtime": "1.2.0", "azureml-train-automl-client": "1.2.0", "azureml-telemetry": "1.2.0", "azureml-pipeline-core": "1.2.0", "azureml-model-management-sdk": "1.0.1b6.post1", "azureml-interpret": "1.2.0", "azureml-explain-model": "1.2.0", "azureml-defaults": "1.2.0", "azureml-dataprep": "1.3.6", "azureml-dataprep-native": "14.1.0", "azureml-core": "1.2.0.post3", "azureml-automl-runtime": "1.2.0", "azureml-automl-core": "1.2.0"}', 'num_cores': '2', 'peak_memory_usage': '528264', 'vm_configuration': 'Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz', 'core_hours': '0.0011019172222222222'}
ForecastingPipelineWrapper(pipeline=Pipeline(memory=None,
steps=[('timeseriestransformer', TimeSeriesTransformer(featurization_config='auto', logger=None,
pipeline_type=<TimeSeriesPipelineType.FULL: 1>)), ('prefittedsoftvotingregressor', PreFittedSoftVotingRegressor(estimators=[('2', Pipeline(memory=None,
steps=[('standardscalerwrapper', <a...333333333333, 0.06666666666666667, 0.06666666666666667, 0.06666666666666667, 0.26666666666666666]))]),
stddev=None)
run_id \
run_algorithm
VotingEnsemble AutoML_1534bb12-57ff-4208-827d-20effa815c1d_5
ElasticNet AutoML_1534bb12-57ff-4208-827d-20effa815c1d_2
AutoArima AutoML_1534bb12-57ff-4208-827d-20effa815c1d_0
primary_metric Score
run_algorithm
VotingEnsemble normalized_mean_absolute_error 0.03
ElasticNet normalized_mean_absolute_error 0.03
AutoArima normalized_mean_absolute_error 0.04
###Markdown
Forecasting

Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.

The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features.

Evaluate

To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
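# MAPE is mentioned above but may not be part of the scalar regression set;
# as a sketch, it can be computed manually (assumes no zero actuals in df_all):
mape = 100 * np.abs((df_all[target_column_name] - df_all['predicted'])
                    / df_all[target_column_name]).mean()
print('MAPE: {:.3f}'.format(mape))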
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize

_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
print([model.name, ' ', model.run])
###Output
_____no_output_____
###Markdown
Develop the scoring script

For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
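# Optional peek (sketch) at the auto-generated scoring script:
# with open(script_file_name) as f:
#     print(f.read()[:500])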
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
print(test_sample)
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except:
print(res_dict)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)

Automated Machine Learning

_**Orange Juice Sales Forecasting**_

Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Data](Data)
1. [Train](Train)
1. [Predict](Predict)
1. [Operationalize](Operationalize)

Introduction

In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.

Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.

The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.

Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Data

You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data Splitting

We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Modeling

For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:

* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities

AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.

You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
Train

The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.

For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.

The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.

Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.

Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|
|**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]|
|**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**drop_column_names**|Name(s) of columns to drop prior to modeling|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_voting_ensemble=False,
enable_stack_ensemble=False,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
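###Markdown
To build intuition for the rolling origin procedure described above, here is a small illustrative sketch (not part of the original sample) using scikit-learn's `TimeSeriesSplit` on a toy series; AutoML's internal fold construction is analogous but time-series aware.
###Code
# Illustration only: each fold trains on a prefix of the series and validates on
# the periods immediately after it, so the validation origin "rolls" forward.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

toy_series = np.arange(12).reshape(-1, 1)  # stand-in for one weekly series
for fold, (train_idx, valid_idx) in enumerate(TimeSeriesSplit(n_splits=3).split(toy_series)):
    print('Fold {}: train t=0..{}, validate t={}..{}'.format(
        fold, train_idx[-1], valid_idx[0], valid_idx[-1]))
###Output
_____no_output_____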
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
## Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
## Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values in which each `NaN` serves as a placeholder to be replaced by a forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_. (A hypothetical variation with partially-known actuals is sketched after the next cell.)
###Code
# Replace ALL values in y_pred by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # builtin float; np.float is deprecated in newer numpy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
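###Markdown
As a hypothetical variation (not required for this sample): if some leading actuals of the test period were already observed, passing them in `y` moves that grain's forecast origin forward, and only the `NaN` entries are forecasted. A minimal sketch, assuming the first three test rows (which belong to the first grain) were known:
###Code
# Hypothetical sketch only: known values must form a contiguous prefix within each grain.
y_query_partial = y_test.copy().astype(float)
y_query_partial[:] = np.nan
y_query_partial[:3] = y_test[:3]  # pretend the first 3 test rows are observed
# y_pred_partial, X_trans_partial = fitted_pipeline.forecast(X_test, y_query_partial)
###Output
_____no_output_____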
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features.
## Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
    return clean
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
## Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
## Develop the scoring script
Serializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values. (The quirk is demonstrated in isolation after the next cell.)
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
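###Markdown
For reference, here is the serialization quirk in isolation: pandas `to_json()` writes Timestamps as epoch milliseconds, which is why `run()` converts them back with `unit='ms'`. A minimal, self-contained demonstration:
###Code
# Timestamps round-trip through JSON as integer milliseconds since the epoch.
demo = pd.DataFrame({time_column_name: [pd.Timestamp('1990-06-14')]})
print(demo.to_json())  # e.g. {"WeekStarting":{"0":645321600000}}
roundtrip = pd.read_json(demo.to_json(), convert_dates=False)
print(pd.to_datetime(roundtrip[time_column_name], unit='ms'))
###Output
_____no_output_____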
###Markdown
Now that the function works locally in the notebook, let's write it into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy>=1.16.0,<=1.16.2','scikit-learn','fbprophet==0.5'], pip_packages=['azureml-defaults','azureml-train-automl'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
## Deploy the model as a Web Service on Azure Container Instances
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = script_file_name,
conda_file = conda_env_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
## Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response could not be parsed, print the raw payload for debugging
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
## Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)
# Automated Machine Learning
_**Orange Juice Sales Forecasting**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)
## Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.
## Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.26.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) disables truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
## Compute
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
## Data
You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
## Data Splitting
We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
## Upload data to datastore
The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation. (A short sketch of this lazy evaluation follows the dataset creation cell below.)
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'],
                       target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
## Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
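###Markdown
A short aside on the lazy evaluation mentioned above: each tabular dataset method below defines a new dataset without reading any data until `to_pandas_dataframe()` is called. This sketch is purely illustrative; the training below uses `train_dataset` directly.
###Code
# Lazily define smaller views of the dataset; data is only read by to_pandas_dataframe().
preview = train_dataset.take(5).to_pandas_dataframe()  # first 5 records
narrow = train_dataset.keep_columns(['WeekStarting', 'Store', 'Brand', 'Quantity'])
print(preview.shape)
narrow.to_pandas_dataframe().head()
###Output
_____no_output_____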
###Markdown
## Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values); a small pandas illustration of this step follows the next cell
* Create features based on time series identifiers to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities
In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.
You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
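###Markdown
To make the imputation step described under Modeling concrete, here is a small pandas illustration (AutoML performs the equivalent internally; this cell does not affect the experiment):
###Code
# Illustration only: forward-fill for the target, median imputation for a feature.
toy = pd.DataFrame({'Quantity': [10.0, np.nan, 12.0, np.nan],
                    'Price': [2.5, np.nan, 2.6, 2.7]})
toy['Quantity'] = toy['Quantity'].ffill()                  # target: forward-fill
toy['Price'] = toy['Price'].fillna(toy['Price'].median())  # feature: median
toy
###Output
_____no_output_____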
###Markdown
## Customization
The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:
1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or which contain no useful data. (A sketch of this scenario follows the next cell.)
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
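###Markdown
Scenario 3 from the list above (dropping columns from featurization) is not needed in this notebook because _logQuantity_ was already removed from the dataframe, but for completeness it would look like the sketch below (illustrative only; not applied to this run):
###Code
# Illustrative only: a separate config showing how columns could be dropped.
example_config = FeaturizationConfig()
example_config.drop_columns = ['logQuantity']
###Output
_____no_output_____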
###Markdown
## Forecasting Parameters
To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.
|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|
## Train
The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.
For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.
The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.
We note here that AutoML can sweep over two types of time-series models:
* Models that are trained for each series such as ARIMA and Facebook's Prophet.
* Models trained across multiple time-series using a regression approach.
In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell.
Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.
Here is a summary of AutoMLConfig parameters used for training the OJ model:
|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether the featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
## Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
## Transparency
View updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
## Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more details.
## Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics) and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).
We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
## Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
## Develop the scoring script
For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
## Deploy the model as a Web Service on Azure Container Instances
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
## Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response could not be parsed, print the raw payload for debugging
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
## Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)
# Automated Machine Learning
_**Orange Juice Sales Forecasting**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)
## Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.
## Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) disables truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
## Data
You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
## Data Splitting
We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns. A quick sanity check of the resulting split follows the next cell.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
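###Markdown
As the quick sanity check mentioned above (optional, assuming the split behaved as intended), every series should contribute exactly n_test_periods rows to the test split:
###Code
# Each (Store, Brand) series should have exactly n_test_periods test rows.
test_counts = X_test.groupby(grain_column_names).size()
assert (test_counts == n_test_periods).all()
test_counts.head()
###Output
_____no_output_____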
###Markdown
## Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns; an illustrative sketch of such features follows the next cell
* Encode categorical variables to numeric quantities
AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.
You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
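###Markdown
To illustrate the time-based features mentioned in the bullet list above, the sketch below derives two simple calendar features by hand (AutoML generates such features internally; this cell does not affect training):
###Code
# Illustration only: calendar features that help a regression model learn seasonality.
calendar_demo = X_train[[time_column_name]].head().copy()
calendar_demo['month'] = calendar_demo[time_column_name].dt.month
calendar_demo['day_of_week'] = calendar_demo[time_column_name].dt.dayofweek
calendar_demo
###Output
_____no_output_____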
###Markdown
## Train
The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.
For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.
The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.
Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.
Here is a summary of AutoMLConfig parameters used for training the OJ model:
|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|
|**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]|
|**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**drop_column_names**|Name(s) of columns to drop prior to modeling|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
## Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
## Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values in which each `NaN` serves as a placeholder to be replaced by a forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_pred by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # builtin float; np.float is deprecated in newer numpy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features.
## Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
    return clean
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib notebook
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
## Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
## Develop the scoring script
Serializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
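###Markdown
As a self-contained illustration of the serialization quirk handled above (a hedged sketch, separate from the deployment flow): `DataFrame.to_json()` encodes Timestamps as epoch milliseconds by default, which is why both the scoring function and the client parse them back with `unit='ms'`.
###Code
# Minimal sketch of the Timestamp round trip through JSON (assumes only pandas).
import pandas as pd
df_demo = pd.DataFrame({'WeekStarting': pd.to_datetime(['1990-06-14', '1990-06-21'])})
payload = df_demo.to_json()                       # Timestamps become ints like 645321600000
df_back = pd.read_json(payload, convert_dates=False)
df_back['WeekStarting'] = pd.to_datetime(df_back['WeekStarting'], unit='ms')
print(df_back.dtypes)                             # WeekStarting is datetime64[ns] again
###Output
_____no_output_____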
###Markdown
Now that the function works locally in the notebook, let's write it down into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
    model_path = Model.get_model_path(model_name = '<<modelid>>') # '<<modelid>>' is replaced below with the model.id of the model to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # if parsing fails, show the raw response (it may be an error payload)
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
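###Markdown
The service can also be exercised without the SDK. Here is a hedged sketch using plain HTTP (it assumes the default ACI deployment with authentication disabled; `scoring_uri` is exposed by the `Webservice` object):
###Code
# POST the same JSON payload directly to the scoring endpoint with requests.
import requests
headers = {'Content-Type': 'application/json'}
raw_response = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(raw_response.status_code)
print(raw_response.text[:200])  # same JSON body that aci_service.run() returns
###Output
_____no_output_____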
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.21.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
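###Markdown
To make the pre-processing description above concrete, here is a hedged, plain-pandas sketch (not AutoML's internal code) of two of the listed steps: re-indexing a series to a regular weekly frequency and forward-filling the target over the created gaps.
###Code
# Toy weekly series with one missing week (1990-06-28).
import pandas as pd
s = pd.Series([10.0, 12.0, 9.0],
              index=pd.to_datetime(['1990-06-14', '1990-06-21', '1990-07-05']))
regular = s.asfreq('W-THU')   # insert the absent time point as NaN to regularize the series
imputed = regular.ffill()     # forward-fill the target, as described above
print(imputed)
###Output
_____no_output_____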
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
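###Markdown
The `n_cross_validations=3` setting above triggers the rolling origin validation described earlier. Here is a hedged sketch of the idea on a toy timeline (AutoML builds its folds internally, so the exact boundaries may differ):
###Code
# Each fold trains on a prefix of the series and validates on the window right after it.
import pandas as pd
timeline = pd.date_range('1990-06-14', periods=12, freq='W-THU')
horizon, n_folds = 2, 3
for fold in range(n_folds):
    cut = len(timeline) - (n_folds - fold) * horizon
    print(f"fold {fold}: train through {timeline[cut - 1].date()}, "
          f"validate {timeline[cut].date()}..{timeline[cut + horizon - 1].date()}")
###Output
_____no_output_____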
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more detail. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # if parsing fails, show the raw response (it may be an error payload)
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.4.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods; the supported methods are constant for target data, and mean, median, most frequent and constant for training data. This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade)
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
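###Markdown
`align_outputs` is imported from the sample's `forecasting_helper` module. To clarify its role, here is a hedged, simplified stand-in that assumes the forecast comes back in the same row order as `X_test` (the real helper also consumes `X_trans` to align rows robustly):
###Code
# Simplified sketch: attach truth and prediction, then keep only rows where both
# are present (rows can be lost at edges of time due to lags/rolling windows).
def align_outputs_sketch(y_pred, X_test, y_test, target_column_name,
                         predicted_column_name='predicted'):
    together = X_test.copy()
    together[target_column_name] = y_test
    together[predicted_column_name] = y_pred
    mask = together[[target_column_name, predicted_column_name]].notnull().all(axis=1)
    return together[mask]
###Output
_____no_output_____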
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # if parsing fails, show the raw response (it may be an error payload)
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.6.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
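###Markdown
 As a quick sanity check (illustrative; it assumes every series has at least `n_test_periods` rows), each series should contribute exactly `n_test_periods` rows to the test set:
###Code
# Verify the split: every (Store, Brand) series contributes n_test_periods test rows.
assert (test.groupby(grain_column_names).size() == n_test_periods).all()
print('Split verified for {} series.'.format(test.groupby(grain_column_names).ngroups))
###Output
_____no_output_____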
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
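###Markdown
 The forward-fill target imputation listed above behaves roughly like a grouped forward-fill in pandas. Here is a minimal sketch of that behavior (illustrative only, not AutoML's internal code):
###Code
# Illustrative only: approximate the per-series forward-fill that AutoML
# applies to the target column during featurization.
filled = (data_subset.sort_values(time_column_name)
                     .groupby(grain_column_names)[target_column_name]
                     .ffill())
print(filled.isna().sum(), 'missing target values remain after forward-fill')
###Output
_____no_output_____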
###Markdown
CustomizationThe featurization customization in forecasting is an advanced AutoML feature that allows customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell the SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods; the supported methods are constant for target data, and mean, median, most frequent and constant for training data. This customization can be used when customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this, configure those columns as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspaceupgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
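###Markdown
 If you are running this notebook in Jupyter, you can also monitor the submitted run with the optional run-details widget (this assumes the `azureml-widgets` package is installed):
###Code
# Optional: display a live progress widget for the submitted run.
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
###Output
_____no_output_____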
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For more detail, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
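###Markdown
 Since MAPE is called out above, here is how it can be computed directly from the aligned frame (a simple sketch; it assumes the actual quantities in `df_all` are non-zero):
###Code
# Illustrative MAPE computation on the aligned prediction frame.
mape = (np.abs(df_all['predicted'] - df_all[target_column_name])
        / np.abs(df_all[target_column_name])).mean() * 100
print('MAPE: {:.2f}%'.format(mape))
###Output
_____no_output_____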
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict['index'])
    y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
    y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # Print the raw response: res_dict would be undefined if JSON parsing failed.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
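###Markdown
 Equivalently, the same payload can be POSTed to the scoring endpoint over plain HTTP (a sketch; it assumes the ACI service was deployed without authentication enabled):
###Code
import requests
# POST the JSON payload directly to the service's scoring URI.
headers = {'Content-Type': 'application/json'}
raw_response = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(raw_response.status_code)
###Output
_____no_output_____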
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.23.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) disables column-width truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the column 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
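###Markdown
 A quick check (illustrative) that the split respects time ordering within each series: the last training week should precede the first test week.
###Code
# Confirm that, per series, all training dates come before all test dates.
last_train = train.groupby(time_series_id_column_names)[time_column_name].max()
first_test = test.groupby(time_series_id_column_names)[time_column_name].min()
assert (last_train < first_test).all()
###Output
_____no_output_____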
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.8.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) disables column-width truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
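###Markdown
 As a rough illustration of the grain-based featurization mentioned above (a sketch, not AutoML's internal code), fixed effects across series can be encoded with one-hot columns:
###Code
# Illustrative only: one-hot encode the grain columns so a single regression
# model can learn a per-series offset.
grain_dummies = pd.get_dummies(train[grain_column_names].astype(str))
grain_dummies.head()
###Output
_____no_output_____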
###Markdown
CustomizationThe featurization customization in forecasting is an advanced AutoML feature that allows customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell the SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods; the supported methods are constant for target data, and mean, median, most frequent and constant for training data. This customization can be used when customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this, configure those columns as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspaceupgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
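###Markdown
 The summary is typically returned as a list of records; if so, it reads more easily as a DataFrame (a sketch that assumes the list-of-dicts return shape):
###Code
# Render the featurization summary as a table for readability.
summary = custom_featurizer.get_featurization_summary()
pd.DataFrame(summary)
###Output
_____no_output_____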
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For more detail, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
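###Markdown
 It can be useful to skim the beginning of the auto-generated scoring script before deploying it (purely optional):
###Code
# Print the first part of the downloaded scoring script for a quick review.
with open(script_file_name) as f:
    print(f.read()[:500])
###Output
_____no_output_____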
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict['index'])
    y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
    y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # Print the raw response: res_dict would be undefined if JSON parsing failed.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.12.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None (rather than the deprecated -1) disables column-width truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
train.to_csv('./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv('./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
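###Markdown
To build intuition for the imputation bullets above, here is a tiny pandas sketch (illustration only, not AutoML's internal code) that forward-fills a target column and median-fills a numeric feature column; the column names mirror this dataset, but the values are made up.
###Code
# Illustration of the imputation behaviors described above, on toy data.
toy = pd.DataFrame({
    'WeekStarting': pd.date_range('1990-06-14', periods=4, freq='W-THU'),
    'Quantity': [100.0, np.nan, 120.0, np.nan],
    'Price': [2.5, np.nan, 2.6, 2.4]
})
toy['Quantity'] = toy['Quantity'].ffill()                   # target: forward-fill
toy['Price'] = toy['Price'].fillna(toy['Price'].median())   # feature: median fill
toy
###Output
_____no_output_____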
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while some should be treated as epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for Imputer only. Users can customize imputation methods; the supported methods are constant for target data and mean, median, most frequent and constant for training data. This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade)
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspaceupgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. 
AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
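###Markdown
AutoML performs the rolling origin splitting internally once `n_cross_validations` is set; purely for intuition, the sketch below hand-builds three rolling-origin folds over a toy weekly date range. The fold count and validation length here are arbitrary choices for this illustration, not what AutoML uses.
###Code
# Rolling-origin CV: the training window grows forward in time and each fold
# validates on the periods immediately after its training window.
toy_periods = pd.date_range('1990-01-04', periods=12, freq='W-THU')
valid_len = 2  # validation window length in weeks (arbitrary for the sketch)
for fold in range(3):
    train_end = len(toy_periods) - valid_len * (3 - fold)
    train_dates = toy_periods[:train_end]
    valid_dates = toy_periods[train_end:train_end + valid_len]
    print('Fold {}: train through {}, validate {} to {}'.format(
        fold, train_dates[-1].date(), valid_dates[0].date(), valid_dates[-1].date()))
###Output
_____no_output_____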
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For more detail, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
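###Markdown
If you also want MAPE computed directly from the aligned dataframe, a minimal helper (a common textbook definition that skips missing and near-zero actuals, not a specific AutoML API) might look like this:
###Code
def mape(actual, pred):
    """Mean absolute percentage error, ignoring NaNs and near-zero actuals."""
    actual = np.asarray(actual, dtype=float)
    pred = np.asarray(pred, dtype=float)
    mask = ~(np.isnan(actual) | np.isnan(pred)) & ~np.isclose(actual, 0.0)
    return np.mean(100 * np.abs((actual[mask] - pred[mask]) / actual[mask]))
print('MAPE: %.2f' % mape(df_all[target_column_name], df_all['predicted']))
###Output
_____no_output_____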
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    print(response)  # show the raw response if it could not be parsed as forecast JSON
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
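###Markdown
To make the grain-based and time-based feature bullets above concrete, the sketch below hand-crafts a couple of features of that flavor: calendar fields derived from the timestamp and one-hot encodings of the grain columns. This is illustration only; AutoML generates its own, richer feature set internally.
###Code
# Illustrative time-based and grain-based features on a few training rows.
example = X_train.head().copy()
example['Month'] = example[time_column_name].dt.month          # time-based
example['DayOfWeek'] = example[time_column_name].dt.dayofweek  # time-based
example = pd.get_dummies(example, columns=grain_column_names)  # grain-based
example.head()
###Output
_____no_output_____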
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|**debug_log**|Log file path for writing debugging information|**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_voting_ensemble=False,
enable_stack_ensemble=False,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` acts as a placeholder to be filled in by the forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_query with NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # builtin float: np.float is deprecated in newer numpy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(best_run.id.split('_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy>=1.16.0,<=1.16.2','scikit-learn','fbprophet==0.5'], pip_packages=['azureml-defaults','azureml-train-automl'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = script_file_name,
conda_file = conda_env_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    print(response)  # show the raw response if it could not be parsed as forecast JSON
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the modelThe examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojsalesforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojsalesforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|**debug_log**|Log file path for writing debugging information|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=5,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
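###Markdown
For intuition about what `max_horizon` means in calendar terms, the sketch below (an illustration added here, not part of the AutoML API) infers the series frequency from the training data and lists the dates a horizon of `n_test_periods` covers beyond the last training week.
###Code
# What does a horizon of n_test_periods mean on the calendar?
train_dates = pd.DatetimeIndex(X_train[time_column_name].unique()).sort_values()
freq = pd.infer_freq(train_dates)  # expected to be weekly for this dataset
horizon_dates = pd.date_range(train_dates.max(), periods=n_test_periods + 1, freq=freq)[1:]
print('A horizon of {} periods covers {} through {}.'.format(
    n_test_periods, horizon_dates[0].date(), horizon_dates[-1].date()))
###Output
_____no_output_____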
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
local_run
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
Make Predictions from the Best Fitted ModelNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. The target predictions can be retrieved by calling the `predict` method on the best model:
###Code
y_pred = fitted_pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Calculate evaluation metrics for the predictionTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE).
###Code
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred)))
print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))
print('MAPE: %.2f' % MAPE(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.5.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
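###Markdown
As a quick sanity check, the minimal sketch below applies the same last-n split to a tiny synthetic frame. The store and brand values are illustrative only; it reuses the notebook's `split_last_n_by_grain`, `time_column_name` and `grain_column_names`.
###Code
# Self-contained toy data: two series of six weekly observations each.
toy = pd.DataFrame({
    'WeekStarting': list(pd.date_range('2020-01-06', periods=6, freq='W-MON')) * 2,
    'Store': [2] * 6 + [5] * 6,
    'Brand': ['tropicana'] * 12,
    'Quantity': range(12)
})
toy_train, toy_test = split_last_n_by_grain(toy, 2)
print(toy_train.shape, toy_test.shape)  # expect (8, 4) and (4, 4)
###Output
_____no_output_____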
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods; the supported methods are constant for target data and mean, median, most frequent and constant for training data. This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data. This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade)
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
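###Markdown
To make the imputation strategies above concrete, here is a small scikit-learn sketch of what constant and median imputation do to a column with missing values. This only illustrates the strategies on toy data; it is not the AutoML featurization internals.
###Code
from sklearn.impute import SimpleImputer
demo = pd.DataFrame({'Quantity': [10.0, np.nan, 30.0],
                     'INCOME': [10.5, np.nan, 10.9]})
# Constant imputation with fill_value 0, as configured for Quantity above.
print(SimpleImputer(strategy='constant', fill_value=0).fit_transform(demo[['Quantity']]).ravel())
# Median imputation, as configured for INCOME above.
print(SimpleImputer(strategy='median').fit_transform(demo[['INCOME']]).ravel())
###Output
_____no_output_____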
###Markdown
TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspaceupgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
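###Markdown
For intuition on the rolling origin validation mentioned above, the sketch below generates expanding-window train/validation index splits over a toy series. The fold count and horizon are illustrative assumptions, not the exact folds AutoML constructs.
###Code
def rolling_origin_splits(n_samples, n_folds, horizon):
    """Yield (train_idx, valid_idx) pairs with an expanding training window."""
    for k in range(n_folds):
        valid_end = n_samples - k * horizon
        valid_start = valid_end - horizon
        yield np.arange(valid_start), np.arange(valid_start, valid_end)

for train_idx, valid_idx in rolling_origin_splits(n_samples=12, n_folds=3, horizon=2):
    print('train:', train_idx.tolist(), '| validate:', valid_idx.tolist())
###Output
_____no_output_____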
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [forecast function notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/auto-ml-forecasting-function.ipynb) demonstrates the use of the forecast function for a variety of use cases. Also, please see the [API documentation for the forecast function](https://docs.microsoft.com/en-us/python/api/azureml-automl-runtime/azureml.automl.runtime.shared.model_wrappers.forecastingpipelinewrapper?view=azure-ml-pyforecast-x-pred--typing-union-pandas-core-frame-dataframe--nonetype----none--y-pred--typing-union-pandas-core-frame-dataframe--numpy-ndarray--nonetype----none--forecast-destination--typing-union-pandas--libs-tslibs-timestamps-timestamp--nonetype----none--ignore-data-errors--bool---false-----typing-tuple-numpy-ndarray--pandas-core-frame-dataframe-). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants, metrics
from matplotlib import pyplot as plt
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
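###Markdown
As a cross-check on the metrics above, MAPE can also be computed directly from the aligned predictions and actuals. The small epsilon guarding against zero actuals is our own assumption.
###Code
def mape(actual, predicted, eps=1e-8):
    """Mean absolute percentage error, guarding against zero actuals."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs(actual - predicted) / np.maximum(np.abs(actual), eps)) * 100.0

print('MAPE: {:.3f}%'.format(mape(df_all[target_column_name], df_all['predicted'])))
###Output
_____no_output_____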
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If parsing fails, res_dict may never have been assigned; show the raw response instead.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
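###Markdown
The same scoring endpoint can also be reached with a plain HTTP POST, which is how a non-Python client would consume the service. This sketch assumes key-based authentication is disabled, which is the ACI default used here.
###Code
import requests
headers = {'Content-Type': 'application/json'}
raw_response = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(raw_response.status_code)
print(raw_response.text[:500])  # first part of the JSON payload
###Output
_____no_output_____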
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../configuration.ipynb) before running this notebook. The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None disables truncation; the former -1 is deprecated
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-oj"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|**experiment_timeout_minutes**|Experimentation timeout in minutes.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'], # 'logQuantity' is a leaky feature, so we remove it.
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_minutes=15,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values in which each `NaN` serves as a placeholder to be replaced by the forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # the builtin float; np.float is deprecated in newer NumPy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test, y_query)
###Output
_____no_output_____
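###Markdown
For intuition on the query mechanism, the toy arrays below show how a query with a few known actuals followed by `NaN`s moves the forecast origin forward. The numbers are made up.
###Code
# Toy example: 5 test periods where the first 2 actuals are already observed
# and the last 3 are the "question marks" the model should fill in.
y_context = np.array([10.0, 12.0])
y_unknown = np.full(3, np.nan)
print(np.concatenate([y_context, y_unknown]))  # forecast origin sits after period 2
###Output
_____no_output_____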
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
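###Markdown
Aggregate metrics can hide series-level behavior, so a per-series error breakdown is often worth a look. The sketch below uses a toy frame so it runs standalone; the brand values and numbers are illustrative only.
###Code
toy_eval = pd.DataFrame({
    'Brand': ['tropicana'] * 3 + ['minute.maid'] * 3,
    'actual':    [10.0, 20.0, 30.0, 5.0, 6.0, 7.0],
    'predicted': [11.0, 19.0, 33.0, 5.5, 6.0, 6.3],
})
per_series_mape = (toy_eval
                   .assign(ape=lambda d: (d['actual'] - d['predicted']).abs() / d['actual'])
                   .groupby('Brand')['ape'].mean() * 100)
print(per_series_mape)
###Output
_____no_output_____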
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
# The request data frame needs to have a y_query column, which corresponds to the query.
X_query = X_test.copy()
X_query['y_query'] = y_query
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If parsing fails, res_dict may never have been assigned; show the raw response instead.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.20.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None disables truncation; the former -1 is deprecated
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
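###Markdown
The forward-fill strategy configured for Price above carries the last observed value into subsequent gaps. A minimal pandas illustration on made-up prices:
###Code
price = pd.Series([2.59, np.nan, np.nan, 2.79, np.nan])
print(price.ffill().tolist())  # [2.59, 2.59, 2.59, 2.79, 2.79]
###Output
_____no_output_____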
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. 
Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
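As a quick sanity check, we can confirm that every series in `X_test` actually carries feature rows for the whole horizon; this sketch assumes the `X_test`, `time_column_name`, and `time_series_id_column_names` variables from the earlier cells.
###Code
# Each (Store, Brand) series should span the full 20-week test window;
# a short count here would mean missing future feature values.
print(X_test.groupby(time_series_id_column_names)[time_column_name]
            .agg(['min', 'max', 'count']))
###Output
_____no_output_____
###Markdown
We can now request the forecast itself: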
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) covers this interface in more detail. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
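As a reference point for the scoring module used below, MAPE can also be computed by hand. This is only a sketch: it assumes `y_test` and `y_predictions` from the cells above and drops rows where the actual is near zero so the percentage stays well defined.
###Code
# Hand-rolled MAPE, for comparison with the scoring-module output below.
import numpy as np
def mape(actual, pred):
    actual = np.asarray(actual, dtype=float)
    pred = np.asarray(pred, dtype=float)
    keep = ~np.isclose(actual, 0.0)  # avoid dividing by (near) zero actuals
    return np.mean(np.abs((actual[keep] - pred[keep]) / actual[keep])) * 100
print('MAPE: %.2f' % mape(y_test, y_predictions))
###Output
_____no_output_____
###Markdown
The cells below compute a fuller set of regression metrics with the AutoML scoring module and plot predictions against actuals: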
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
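Before downloading, it can be useful to list which artifacts the best run actually produced; `get_file_names()` is part of the standard Run API.
###Code
# List the files logged under outputs/, including the auto-generated scoring script.
for name in best_run.get_file_names():
    if name.startswith('outputs/'):
        print(name)
###Output
_____no_output_____
###Markdown
We now download the generated scoring file: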
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # 'response' is always defined, unlike res_dict when json.loads fails
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
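###Markdown
The same payload can also be POSTed straight to the REST endpoint, which is how a non-Python client would consume the service; `scoring_uri` is a standard attribute of a deployed web service. A minimal sketch:
###Code
# Equivalent call over plain HTTP; aci_service.run() above does this for us.
import requests
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(resp.status_code)
###Output
_____no_output_____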
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the model. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
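For intuition only, the sketch below shows the kind of time-based features AutoML derives automatically from the `WeekStarting` column; nothing here needs to run before training.
###Code
# Illustrative time-based features; AutoML generates these internally.
sample = data_subset[[time_column_name]].head(3).copy()
sample['year'] = sample[time_column_name].dt.year
sample['month'] = sample[time_column_name].dt.month
sample['day_of_week'] = sample[time_column_name].dt.dayofweek
sample
###Output
_____no_output_____
###Markdown
We now separate the target column from the training frame: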
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_ensembling**|Allow AutoML to create ensembles of the best performing models||**debug_log**|Log file path for writing debugging information||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods # optional
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=5,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
PredictNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` acts as a placeholder to be replaced by a forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
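To make the _forecast origin_ idea concrete, here is a tiny made-up example: if the first two actuals of a series were known, filling them into the query (instead of `NaN`) would move that series' origin forward by two periods.
###Code
# Hypothetical query for one series: known actuals first, NaN afterwards.
# The values below are made up purely for illustration.
import numpy as np
y_context = np.full(5, np.nan)
y_context[:2] = [1200.0, 1350.0]
print(y_context)  # the forecast origin sits just after the last non-NaN value
###Output
_____no_output_____
###Markdown
For the test set here no actuals are known, so the entire query is `NaN`: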
###Code
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # np.float is removed in newer NumPy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib notebook
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
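The quirk in question can be seen in one line: `DataFrame.to_json()` writes Timestamps as epoch milliseconds, which is why the receiving side must parse them back with `unit='ms'`.
###Code
# Demonstrate the serialization quirk handled below.
import pandas as pd
demo = pd.DataFrame({'WeekStarting': pd.to_datetime(['1990-06-14'])})
print(demo.to_json())  # the timestamp comes out as epoch milliseconds
###Output
_____no_output_____
###Markdown
With that in mind, we develop the `run()` function interactively: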
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it down into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # 'response' is always defined, unlike res_dict when json.loads fails
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. 
You can specify a new empty folder.||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_voting_ensemble=False,
enable_stack_ensemble=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` acts as a placeholder to be replaced by a forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # np.float is removed in newer NumPy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it down into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-train-automl'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # 'response' is always defined, unlike res_dict when json.loads fails
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the model. The examples in the following code samples use the [University of Chicago's Dominick's Finer Foods dataset](https://research.chicagobooth.edu/kilts/marketing-databases/dominicks) to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup To use the *forecasting* task in AutoML, you need to have the **azuremlftk** package installed in your environment. The following cell tests whether this package is installed locally and, if not, gives you instructions for installing it.
###Code
try:
import ftk
print('Using FTK version ' + ftk.__version__)
except ImportError:
print("Unable to import forecasting package. This notebook does not work without this package.\n"
+ "Please open a command prompt and run `pip install azuremlftk` to install the package. \n"
+ "Make sure you install the package into AutoML's Python environment.\n\n"
+ "For instance, if AutoML is installed in a conda environment called `python36`, run:\n"
+ "> activate python36\n> pip install azuremlftk")
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojsalesforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojsalesforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingFor the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series.
###Code
ntest_periods = 20
def split_last_n_by_grain(df, n):
"""
Group df by grain and split on last n rows for each group
"""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data, ntest_periods)
###Output
_____no_output_____
###Markdown
Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities
AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.
You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning):
###Code
nvalidation_periods = 20
X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)
###Output
_____no_output_____
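###Markdown
For intuition on the time-series pre-processing steps listed in the Modeling section above, here is a minimal pandas sketch (toy data; not AutoML's internal code) of frequency regularization and forward-fill imputation:
###Code
# Sketch: make an irregular weekly series regular, then forward-fill the target.
toy = pd.DataFrame({'Quantity': [10.0, 12.0, 9.0]},
                   index=pd.to_datetime(['2020-01-06', '2020-01-13', '2020-01-27']))
toy_regular = toy.asfreq('W-MON')  # inserts the absent 2020-01-20 week as NaN
toy_filled = toy_regular.ffill()   # impute the missing target via forward-fill
print(toy_filled)
###Output
_____no_output_____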
###Markdown
We also need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
y_validate = X_validate.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
Train
The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**iterations**|Number of iterations. In each iteration, AutoML trains a specific pipeline on the given data|
|**X**|Training matrix of features, shape = [n_training_samples, n_features]|
|**y**|Target values, shape = [n_training_samples, ]|
|**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]|
|**y_valid**|Target values for validation, shape = [n_validation_samples, ]|
|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
###Code
automl_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity']
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
X=X_train,
y=y_train,
X_valid=X_validate,
y_valid=y_validate,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**automl_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
local_run
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
Make Predictions from the Best Fitted ModelNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. The target predictions can be retrieved by calling the `predict` method on the best model:
###Code
y_pred = fitted_pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Calculate evaluation metrics for the prediction
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE).
###Code
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred)))
print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))
print('MAPE: %.2f' % MAPE(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)
Automated Machine Learning
_**Orange Juice Sales Forecasting**_
Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)
Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.
Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.9.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Compute
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastore
The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv('./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv('./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities
In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook.
You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
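###Markdown
For intuition on the time-series pre-processing steps listed in the Modeling section above, here is a minimal pandas sketch (toy data; not AutoML's internal code) of frequency regularization and forward-fill imputation:
###Code
# Sketch: make an irregular weekly series regular, then forward-fill the target.
toy = pd.DataFrame({'Quantity': [10.0, 12.0, 9.0]},
                   index=pd.to_datetime(['2020-01-06', '2020-01-13', '2020-01-27']))
toy_regular = toy.asfreq('W-MON')  # inserts the absent 2020-01-20 week as NaN
toy_filled = toy_regular.ffill()   # impute the missing target via forward-fill
print(toy_filled)
###Output
_____no_output_____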
###Markdown
Customization
The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:
1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell the SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize the imputation method; the supported methods are constant for the target data, and mean, median, most frequent and constant for the training data. This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.

This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
Train
The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.
For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.
The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.
We note here that AutoML can sweep over two types of time-series models:
* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).
* Models trained across multiple time-series using a regression approach.
In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell.
Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.
Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether the featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data* and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
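###Markdown
For intuition on the rolling origin validation mentioned above, here is a minimal sketch (plain Python, not AutoML's internal code) of how time-ordered CV folds are laid out with an expanding training window:
###Code
# Sketch: rolling-origin CV folds on a toy series of 100 weekly points with a 20-week horizon.
n_samples, horizon, n_folds = 100, 20, 3
for fold in range(n_folds):
    train_end = n_samples - (n_folds - fold) * horizon
    # each fold trains on an expanding prefix and validates on the periods just after it
    print('Fold {}: train on [0, {}), validate on [{}, {})'.format(
        fold, train_end, train_end, train_end + horizon))
###Output
_____no_output_____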
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
Transparency
View updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
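###Markdown
As a minimal sketch of the more general `forecast(X, y)` interface discussed below (the `y_query` name is illustrative, not from this notebook), NaN entries in the target query mark the periods to be predicted:
###Code
# Sketch: pass an explicit target query; np.nan marks the periods to forecast.
y_query = np.full(len(X_test), np.nan)
y_pred_query, X_trans_query = fitted_model.forecast(X_test, y_query)
###Output
_____no_output_____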
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X, y)`, as `predict(X)` is reserved for internal purposes on forecasting models. For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring script
For the deployment, we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # if json.loads failed, res_dict is undefined, so print the raw response instead
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)
Automated Machine Learning
_**Orange Juice Sales Forecasting**_
Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)
Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.
Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.7.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Compute
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastore
The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv('./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv('./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities
In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook.
You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
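###Markdown
For intuition on the time-series pre-processing steps listed in the Modeling section above, here is a minimal pandas sketch (toy data; not AutoML's internal code) of frequency regularization and forward-fill imputation:
###Code
# Sketch: make an irregular weekly series regular, then forward-fill the target.
toy = pd.DataFrame({'Quantity': [10.0, 12.0, 9.0]},
                   index=pd.to_datetime(['2020-01-06', '2020-01-13', '2020-01-27']))
toy_regular = toy.asfreq('W-MON')  # inserts the absent 2020-01-20 week as NaN
toy_filled = toy_regular.ffill()   # impute the missing target via forward-fill
print(toy_filled)
###Output
_____no_output_____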
###Markdown
Customization
The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:
1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell the SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize the imputation method; the supported methods are constant for the target data, and mean, median, most frequent and constant for the training data. This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data.

This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
Train
The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.
For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.
The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.
We note here that AutoML can sweep over two types of time-series models:
* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).
* Models trained across multiple time-series using a regression approach.
In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell.
Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.
Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether the featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data* and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
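###Markdown
For intuition on the rolling origin validation mentioned above, here is a minimal sketch (plain Python, not AutoML's internal code) of how time-ordered CV folds are laid out with an expanding training window:
###Code
# Sketch: rolling-origin CV folds on a toy series of 100 weekly points with a 20-week horizon.
n_samples, horizon, n_folds = 100, 20, 3
for fold in range(n_folds):
    train_end = n_samples - (n_folds - fold) * horizon
    # each fold trains on an expanding prefix and validates on the periods just after it
    print('Fold {}: train on [0, {}), validate on [{}, {})'.format(
        fold, train_end, train_end, train_end + horizon))
###Output
_____no_output_____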
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
Transparency
View updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
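###Markdown
As a minimal sketch of the more general `forecast(X, y)` interface discussed below (the `y_query` name is illustrative, not from this notebook), NaN entries in the target query mark the periods to be predicted:
###Code
# Sketch: pass an explicit target query; np.nan marks the periods to forecast.
y_query = np.full(len(X_test), np.nan)
y_pred_query, X_trans_query = fitted_model.forecast(X_test, y_query)
###Output
_____no_output_____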
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X, y)`, as `predict(X)` is reserved for internal purposes on forecasting models. For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring script
For the deployment, we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # if json.loads failed, res_dict is undefined, so print the raw response instead
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)
Automated Machine Learning
_**Orange Juice Sales Forecasting**_
Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)
Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.
Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.28.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the 'logQuantity' column as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
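###Markdown
As a quick sanity check on the split (a sketch, assuming `test`, `time_series_id_column_names`, and `n_test_periods` from the cells above), every series should contribute exactly n_test_periods rows to the test set:
###Code
# Count test rows per series and verify each equals the forecast window.
counts = test.groupby(time_series_id_column_names).size()
assert (counts == n_test_periods).all()
counts
###Output
_____no_output_____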
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
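###Markdown
Because tabular datasets are lazily evaluated, transformations such as `take` define a new dataset without materializing all rows. A minimal sketch, assuming `train_dataset` from the cell above:
###Code
# Nothing is pulled from the datastore until to_pandas_dataframe() runs.
train_dataset.take(3).to_pandas_dataframe()
###Output
_____no_output_____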
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in scenarios where the column's type does not correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used in scenarios where our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
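###Markdown
To make the three imputation strategies configured above concrete, here is what they do on a toy series in plain pandas. This is illustrative only; AutoML applies the configured imputers internally during featurization.
###Code
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan])
print(s.fillna(0))           # constant with fill_value=0, as configured for Quantity
print(s.fillna(s.median()))  # median, as configured for INCOME
print(s.ffill())             # forward fill, as configured for Price
###Output
_____no_output_____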
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
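###Markdown
The `freq='W-THU'` value above is a pandas offset alias: weekly frequency anchored on Thursdays. A quick stand-alone way to sanity-check an alias before passing it in (not part of the AutoML API):
###Code
import pandas as pd

# to_offset raises a ValueError for an invalid alias, so this doubles as validation.
print(pd.tseries.frequencies.to_offset('W-THU'))
print(pd.date_range('2020-01-02', periods=3, freq='W-THU'))
###Output
_____no_output_____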
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
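###Markdown
The summary is returned as a list of per-feature records, so a DataFrame view can be easier to scan (a sketch, assuming `custom_featurizer` from the cell above):
###Code
import pandas as pd

summary = custom_featurizer.get_featurization_summary()
pd.DataFrame.from_records(summary).head()
###Output
_____no_output_____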
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more details. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
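###Markdown
As a cross-check of the scoring module above, MAPE can also be computed by hand. A minimal sketch, assuming `df_all` from the previous cell and nonzero actual quantities:
###Code
import numpy as np

actual = df_all[target_column_name].values
pred = df_all['predicted'].values
mape = np.mean(np.abs((actual - pred) / actual)) * 100
print('MAPE: {:.2f}%'.format(mape))
###Output
_____no_output_____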
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
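###Markdown
Optionally, confirm the registration by listing models with that name in the workspace. A sketch; `Model` is imported explicitly here because the original flow only imports it in a later cell.
###Code
from azureml.core.model import Model

for m in Model.list(ws, name=model_name):
    print(m.name, m.version)
###Output
_____no_output_____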
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict['index'])
    y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
    y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response could not be parsed, print the raw payload for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
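###Markdown
The same service can also be called over plain HTTP, which is how non-Python clients would consume it. A minimal sketch using `requests`, assuming the service deployed above (without authentication enabled) and the `test_sample` payload built earlier:
###Code
import requests

headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(resp.status_code)
print(resp.text[:200])
###Output
_____no_output_____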
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#Introduction)1. [Setup](#Setup)1. [Compute](#Compute)1. [Data](#Data)1. [Train](#Train)1. [Predict](#Predict)1. [Operationalize](#Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.15.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
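###Markdown
A quick check that the split is chronological within each series (a sketch, assuming `train` and `test` from the cell above): the last training date must precede the first test date for every series.
###Code
last_train = train.groupby(time_series_id_column_names)[time_column_name].max()
first_test = test.groupby(time_series_id_column_names)[time_column_name].min()
assert (last_train < first_test).all()
last_train.head()
###Output
_____no_output_____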
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in scenarios where the column's type does not correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used in scenarios where our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. 
AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
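###Markdown
To make the rolling origin procedure described above concrete, here is a toy sketch of how such folds are laid out on a weekly series. This is illustrative only; AutoML constructs the actual folds internally from `n_cross_validations`.
###Code
import pandas as pd

dates = pd.date_range('2020-01-02', periods=12, freq='W-THU')
horizon = 2   # validation window, in units of the series frequency
n_folds = 3

for fold in range(n_folds):
    # Each successive fold moves the forecast origin back by one horizon.
    cut = len(dates) - fold * horizon
    train_end = cut - horizon
    print('fold {}: train through {}, validate {}..{}'.format(
        fold, dates[train_end - 1].date(),
        dates[train_end].date(), dates[cut - 1].date()))
###Output
_____no_output_____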
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more details. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict['index'])
    y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
    y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response could not be parsed, print the raw payload for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#Introduction)1. [Setup](#Setup)1. [Compute](#Compute)1. [Data](#Data)1. [Train](#Train)1. [Predict](#Predict)1. [Operationalize](#Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-oj"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
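###Markdown
Before moving on, it may help to see what the two imputation strategies above (forward-fill for the target, median for numeric features) actually do. The cell below is a purely illustrative pandas sketch on a hypothetical toy frame, not AutoML's internal featurization code.
###Code
import numpy as np
import pandas as pd

# Toy frame with gaps (hypothetical values, for illustration only).
toy = pd.DataFrame({
    'Quantity': [10.0, np.nan, 14.0, np.nan, 9.0],  # target column -> forward-fill
    'Price': [2.5, 2.7, np.nan, 2.6, np.nan]        # feature column -> median
})
toy['Quantity'] = toy['Quantity'].ffill()                  # carry the last observed sale forward
toy['Price'] = toy['Price'].fillna(toy['Price'].median())  # fall back to the column median
print(toy)
###Output
_____no_output_____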
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_minutes**|Experimentation timeout in minutes.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'], # 'logQuantity' is a leaky feature, so we remove it.
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_minutes=15,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
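###Markdown
To make the rolling origin idea concrete, the next cell is a minimal sketch of how such folds are laid out on a single weekly series. The dates and fold sizes are hypothetical and this is not AutoML's internal implementation; it only illustrates that the training window always ends at the fold's origin and validation covers the periods immediately after it.
###Code
import pandas as pd

# Hypothetical 12-week series; 3 folds, each validating on the 2 weeks after its origin.
dates = pd.date_range('2020-01-02', periods=12, freq='W-THU')
n_folds, horizon = 3, 2
for fold in range(n_folds):
    origin = len(dates) - (n_folds - fold) * horizon
    train_dates = dates[:origin]                  # everything up to the origin
    valid_dates = dates[origin:origin + horizon]  # the next `horizon` periods
    print('Fold {}: train through {}, validate {}..{}'.format(
        fold, train_dates[-1].date(), valid_dates[0].date(), valid_dates[-1].date()))
###Output
_____no_output_____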
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
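###Markdown
For reference, the MAPE quoted above can also be computed by hand from the aligned frame. The sketch below assumes `df_all` and `target_column_name` from the previous cells and simply guards against division by zero.
###Code
import numpy as np

actual = df_all[target_column_name].values
pred = df_all['predicted'].values
mask = ~np.isclose(actual, 0.0)  # drop rows where the actual is (near) zero
mape = 100 * np.mean(np.abs((actual[mask] - pred[mask]) / actual[mask]))
print('MAPE: %.2f' % mape)
###Output
_____no_output_____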
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
# The request data frame needs a y_query column corresponding to the forecast query.
X_query = X_test.copy()
X_query['y_query'] = np.NaN
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except:
    print(response)  # res_dict may not exist if json.loads failed
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#Introduction)1. [Setup](#Setup)1. [Data](#Data)1. [Train](#Train)1. [Predict](#Predict)1. [Operationalize](#Operationalize) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the modelThe examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
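###Markdown
The grain-based features mentioned above act like fixed effects: each Store/Brand combination gets its own indicator columns, so a single regression model can learn a per-series offset. The cell below is only a rough sketch of that idea using pandas one-hot encoding, not AutoML's actual featurizer.
###Code
import pandas as pd

# One indicator column per Store/Brand value (illustrative only).
grain_dummies = pd.get_dummies(X_train[grain_column_names].astype(str),
                               columns=grain_column_names)
grain_dummies.head()
###Output
_____no_output_____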
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_ensembling**|Allow AutoML to create ensembles of the best performing models||**debug_log**|Log file path for writing debugging information||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods # optional
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=5,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
PredictNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` acts as a question mark to be replaced by the forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the training data that contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # use the builtin float; np.float is deprecated in newer numpy
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib notebook
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
    return json.dumps({"forecast": forecast_as_list,   # return the minimum over the wire:
                       "index": index_as_df.to_json()  # the forecast and its index, not the featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it down into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
    return json.dumps({"forecast": forecast_as_list,   # return the minimum over the wire:
                       "index": index_as_df.to_json()  # the forecast and its index, not the featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(best_run.id.split('_')[-1])  # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except:
    print(response)  # res_dict may not exist if json.loads failed
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#introduction)1. [Setup](#setup)1. [Compute](#compute)1. [Data](#data)1. [Train](#train)1. [Forecast](#forecast)1. [Operationalize](#operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import json
import logging
import azureml.core
import pandas as pd
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.39.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = "automl-ojforecasting"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["SKU"] = ws.sku
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
pd.set_option("display.max_colwidth", None)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_D12_V2", max_nodes=6
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = "WeekStarting"
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop("logQuantity", axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ["Store", "Brand"]
nseries = data.groupby(time_series_id_column_names).ngroups
print("Data contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print("Data subset contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time
time_series_id_column_names, group_keys=False
)
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
from azureml.data.dataset_factory import TabularDatasetFactory
datastore = ws.get_default_datastore()
train_dataset = TabularDatasetFactory.register_pandas_dataframe(
train, target=(datastore, "dataset/"), name="dominicks_OJ_train"
)
test_dataset = TabularDatasetFactory.register_pandas_dataframe(
test, target=(datastore, "dataset/"), name="dominicks_OJ_test"
)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = "Quantity"
###Output
_____no_output_____
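###Markdown
One of the pre-processing steps listed above creates time-based features to capture seasonality. As a hedged illustration (AutoML derives a richer calendar feature set internally), the next cell shows what simple features of that kind look like when derived from the time column; it assumes a pandas version with `Series.dt.isocalendar()` (>= 1.1).
###Code
# Illustrative calendar features derived from the training data's time column.
calendar = train[[time_column_name]].copy()
calendar['week_of_year'] = calendar[time_column_name].dt.isocalendar().week
calendar['month'] = calendar[time_column_name].dt.month
calendar['year'] = calendar[time_column_name].dt.year
calendar.head()
###Output
_____no_output_____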
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose("CPWVOL5", "Numeric")
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params(
"Imputer", ["Quantity"], {"strategy": "constant", "fill_value": 0}
)
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params(
"Imputer", ["INCOME"], {"strategy": "median"}
)
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params("Imputer", ["Price"], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|This optional parameter represents the column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined or incorrectly defined, time series identifiers will be created automatically if they exist.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given or incorrectly given, AutoML automatically creates time_series_id columns if they exist. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
freq="W-THU", # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(
task="forecasting",
debug_log="automl_oj_sales_errors.log",
primary_metric="normalized_mean_absolute_error",
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters,
)
###Output
_____no_output_____
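###Markdown
The `freq='W-THU'` value above is a pandas offset alias meaning weekly periods anchored on Thursdays. You can sanity-check any offset alias by generating a few dates with it, as in this small example (the start date is arbitrary):
###Code
import pandas as pd

# 'W-THU': weekly frequency, with periods ending on Thursdays.
pd.date_range('1990-06-14', periods=4, freq='W-THU')
###Output
_____no_output_____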
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Run detailsBelow we retrieve the best Run object from among all the runs in the experiment.
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties["model_name"]
best_run
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
# Download the featurization summary JSON file locally
best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json")
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
fs = pd.DataFrame.from_records(records)
# View a summary of the featurization
fs[["RawFeatureName", "TypeDetected", "Dropped", "EngineeredFeatureCount", "Transformations"]]
###Output
_____no_output_____
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(
test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b")
test_test = plt.scatter(
fcst_df[target_column_name], fcst_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = "AutoML OJ forecaster"
tags = None
model = remote_run.register_model(
model_name=model_name, description=description, tags=tags
)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = "score_fcast.py"
best_run.download_file("outputs/scoring_file_v_1_0_0.py", script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(
environment=best_run.get_environment(), entry_script=script_file_name
)
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=2,
memory_gb=4,
tags={"type": "automl-forecasting"},
description="Automl forecasting sample service",
)
aci_service_name = "automl-oj-forecast-01"
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
sample_quantiles = [0.025, 0.975]
test_sample = json.dumps(
{"data": X_query.to_dict(orient="records"), "quantiles": sample_quantiles}
)
response = aci_service.run(input_data=test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict["index"])
y_fcst_all[time_column_name] = pd.to_datetime(
y_fcst_all[time_column_name], unit="ms"
)
y_fcst_all["forecast"] = res_dict["forecast"]
y_fcst_all["prediction_interval"] = res_dict["prediction_interval"]
except Exception:
    # Parsing failed; print the raw service response for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, "automl-oj-forecast-01")
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.25.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
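###Markdown
As a quick, hedged aside (a toy snippet for illustration only, not part of the AutoML pipeline), the forward-fill imputation that AutoML applies to the target can be mimicked with plain pandas:
###Code
# Toy example: forward-fill replaces each missing value with the last
# observed value, mirroring how gaps in the target series are filled.
s = pd.Series([10.0, None, None, 40.0])
print(s.ffill())
###Output
_____no_output_____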
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
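# Hedged sanity check (illustrative): 'W-THU' is the pandas offset alias for
# weekly frequency anchored on Thursdays.
print(pd.tseries.frequencies.to_offset("W-THU"))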
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
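# Hedged alternative (assumed equivalent to the call above): pass an explicit
# target query in which NaN marks every date whose value should be predicted.
import numpy as np
y_query = np.full(X_test.shape[0], np.nan)
y_predictions, X_trans = fitted_model.forecast(X_test, y_query)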
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X, y)`, as `predict(X)` is reserved for internal purposes on forecasting models. For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # Parsing failed; print the raw service response for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Compute](compute)1. [Data](data)1. [Train](train)1. [Forecast](forecast)1. [Operationalize](operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.34.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
test_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_test.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a helper function, `run_remote_inference` (imported from the `run_forecast` script), that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv')
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b')
test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
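# Optional extra diagnostic (illustrative only): the distribution of
# residuals gives a quick visual check for bias in the forecasts.
plt.figure()
plt.hist(fcst_df[target_column_name] - fcst_df['predicted'], bins=30)
plt.xlabel('actual - predicted')
plt.title('Residual distribution')
plt.show()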
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 4,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({"data": json.loads(X_query.to_json(orient="records"))})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # Parsing failed; print the raw service response for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Compute](compute)1. [Data](data)1. [Train](train)1. [Forecast](forecast)1. [Operationalize](operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import json
import logging
import azureml.core
import pandas as pd
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
This notebook is compatible with Azure ML SDK version 1.35.0 or later.
###Code
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = "automl-ojforecasting"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["SKU"] = ws.sku
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
output["SDK Version"] = azureml.core.VERSION
pd.set_option("display.max_colwidth", None)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_D12_V2", max_nodes=6
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = "WeekStarting"
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the column 'logQuantity' as it is a leaky feature.
data.drop("logQuantity", axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ["Store", "Brand"]
nseries = data.groupby(time_series_id_column_names).ngroups
print("Data contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print("Data subset contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time
time_series_id_column_names, group_keys=False
)
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
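# Illustrative sanity check (not part of the original tutorial): each series
# in the test split should contain exactly n_test_periods rows.
assert (
    test.groupby(time_series_id_column_names).size() == n_test_periods
).all(), "Unexpected test split sizes"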
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
from azureml.data.dataset_factory import TabularDatasetFactory
datastore = ws.get_default_datastore()
train_dataset = TabularDatasetFactory.register_pandas_dataframe(
train, target=(datastore, "dataset/"), name="dominicks_OJ_train"
)
test_dataset = TabularDatasetFactory.register_pandas_dataframe(
test, target=(datastore, "dataset/"), name="dominicks_OJ_test"
)
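# Illustrative: registered tabular datasets can later be fetched by name from
# the workspace; the name below matches the registration above.
from azureml.core import Dataset

retrieved_train = Dataset.get_by_name(ws, name="dominicks_OJ_train")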
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = "Quantity"
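# Illustrative: preview the sampling frequency AutoML will detect, using the
# first (Store, Brand) group as a representative series. A result of None
# would indicate gaps that AutoML fills in during pre-processing.
first_key = next(iter(train.groupby(time_series_id_column_names).groups))
one_series = train[
    (train.Store == first_key[0]) & (train.Brand == first_key[1])
].sort_values(time_column_name)
print("Inferred frequency:", pd.infer_freq(one_series[time_column_name]))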
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose("CPWVOL5", "Numeric")
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params(
"Imputer", ["Quantity"], {"strategy": "constant", "fill_value": 0}
)
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params(
"Imputer", ["INCOME"], {"strategy": "median"}
)
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params("Imputer", ["Price"], {"strategy": "ffill"})
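# Illustrative: the third customization scenario described above (dropping
# columns from featurization) would look like the line below. It is left
# commented out because 'logQuantity' was already dropped from the DataFrame.
# featurization_config.drop_columns = ["logQuantity"]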
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq="W-THU", # Set the forecast frequency to be weekly (start on each Thursday)
)
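# Illustrative check: the 'W-THU' frequency above assumes every observation
# falls on a Thursday; verify that assumption against the training data.
assert set(train[time_column_name].dt.day_name()) == {"Thursday"}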
automl_config = AutoMLConfig(
task="forecasting",
debug_log="automl_oj_sales_errors.log",
primary_metric="normalized_mean_absolute_error",
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters,
)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Run detailsBelow we retrieve the best Run object from among all the runs in the experiment.
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties["model_name"]
best_run
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
# Download the featurization summary JSON file locally
best_run.download_file(
"outputs/featurization_summary.json", "featurization_summary.json"
)
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
fs = pd.DataFrame.from_records(records)
# View a summary of the featurization
fs[
[
"RawFeatureName",
"TypeDetected",
"Dropped",
"EngineeredFeatureCount",
"Transformations",
]
]
###Output
_____no_output_____
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(
test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b")
test_test = plt.scatter(
fcst_df[target_column_name], fcst_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
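# Illustrative: an explicit MAPE computation matching the metric discussed
# above; rows whose actual value is (near) zero are excluded to avoid
# division by zero.
import numpy as np

actuals = fcst_df[target_column_name].values
preds = fcst_df["predicted"].values
nonzero = ~np.isclose(actuals, 0.0)
mape = np.mean(np.abs((actuals[nonzero] - preds[nonzero]) / actuals[nonzero])) * 100
print("MAPE: {:.2f}%".format(mape))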
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = "AutoML OJ forecaster"
tags = None
model = remote_run.register_model(
model_name=model_name, description=description, tags=tags
)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = "score_fcast.py"
best_run.download_file("outputs/scoring_file_v_1_0_0.py", script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(
environment=best_run.get_environment(), entry_script=script_file_name
)
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=2,
memory_gb=4,
tags={"type": "automl-forecasting"},
description="Automl forecasting sample service",
)
aci_service_name = "automl-oj-forecast-01"
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts the complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of dictionary.
sample_quantiles = [0.025, 0.975]
test_sample = json.dumps(
{"data": X_query.to_dict(orient="records"), "quantiles": sample_quantiles}
)
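# Illustrative: peek at the serialized payload before sending it; helpful
# when debugging schema or serialization mismatches.
print(test_sample[:200], "...")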
response = aci_service.run(input_data=test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict["index"])
y_fcst_all[time_column_name] = pd.to_datetime(
y_fcst_all[time_column_name], unit="ms"
)
y_fcst_all["forecast"] = res_dict["forecast"]
y_fcst_all["prediction_interval"] = res_dict["prediction_interval"]
except Exception:
print(res_dict)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, "automl-oj-forecast-01")
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-oj"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
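# Illustrative: confirm the split is contiguous in time; training should end
# right before the 20-week test window begins.
print("Train ends:", train[time_column_name].max())
print("Test starts:", test[time_column_name].min())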
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'], # 'logQuantity' is a leaky feature, so we remove it.
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
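# Illustrative: forecast() returns predictions aligned row-wise with the
# featurized output, so their lengths should match.
assert len(y_predictions) == len(X_trans)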
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
# The request data frame needs to have a y_query column, which corresponds to the forecast query.
X_query = X_test.copy()
X_query['y_query'] = np.nan
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts the complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
print(res_dict)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
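# Illustrative: row counts after the split; each series contributes
# n_test_periods rows to the test set.
print("Train shape:", X_train.shape, "| Test shape:", X_test.shape)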
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_ensembling**|Allow AutoML to create ensembles of the best performing models||**debug_log**|Log file path for writing debugging information||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. 
You can specify a new empty folder.||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` serves the function of the question mark to be replaced by forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the train data which contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
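# Illustrative: inspect the index of the featurized output; it typically
# contains the time column, the grain columns and a forecast-origin level.
print(X_trans.index.names)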
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib notebook
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
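###Markdown
As an illustrative sketch first (a hypothetical toy frame, not part of the original workflow): by default, pandas `to_json()` serializes Timestamps as integer epoch milliseconds, which is why the code further below parses them back with `unit='ms'`.
###Code
# Toy demonstration of the serialization quirk: the timestamp comes out
# as milliseconds since the epoch, not as an ISO date string.
toy_df = pd.DataFrame({'WeekStarting': pd.to_datetime(['1990-06-14'])})
print(toy_df.to_json())
###Output
_____no_output_____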
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Now that the function works locally in the notebook, let's write it into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
print(response)  # fall back to the raw response; res_dict may be undefined if parsing failed
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#Introduction)1. [Setup](#Setup)1. [Compute](#Compute)1. [Data](#Data)1. [Train](#Train)1. [Predict](#Predict)1. [Operationalize](#Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.3.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
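###Markdown
Before configuring the run, here is a minimal pandas sketch of the imputation behaviors described above (toy data with hypothetical values, not AutoML internals): forward-fill for the target column and median fill for a numeric feature column.
###Code
# Toy illustration of the default imputation rules described above:
# forward-fill the target, median-fill a numeric feature column.
toy = pd.DataFrame({'Quantity': [10.0, np.nan, 12.0, np.nan],
                    'Price': [2.5, np.nan, 2.7, 2.6]})
toy['Quantity'] = toy['Quantity'].ffill()
toy['Price'] = toy['Price'].fillna(toy['Price'].median())
print(toy)
###Output
_____no_output_____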
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'], # 'logQuantity' is a leaky feature, so we remove it.
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
print(response)  # fall back to the raw response; res_dict may be undefined if parsing failed
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#Introduction)1. [Setup](#Setup)1. [Compute](#Compute)1. [Data](#Data)1. [Train](#Train)1. [Predict](#Predict)1. [Operationalize](#Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.30.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the column 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the detected type of a column does not correctly reflect its purpose. For instance, some numeric columns should be treated as Categorical and converted accordingly, while others hold epoch timestamps that should be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when you know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data. (A commented sketch of this scenario appears at the end of the next cell.)
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
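# Scenario 3 from the description above (dropping columns from featurization)
# could be expressed similarly; for example, with a hypothetical column name:
# featurization_config.drop_columns = ['week_of_year']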
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon. We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
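###Markdown
As a quick illustrative check (not part of the training flow), the pandas `date_range` function shows what the 'W-THU' offset alias used above generates: weekly sample points anchored on Thursdays.
###Code
# Preview a few sample points of the 'W-THU' weekly-on-Thursday frequency.
print(pd.date_range(start='2020-01-01', periods=4, freq='W-THU'))
###Output
_____no_output_____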
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For a more detailed demonstration, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
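###Markdown
The residuals documentation linked above describes further diagnostics; as a minimal sketch, residuals can be computed directly from the aligned dataframe built earlier:
###Code
# Residuals are actuals minus predictions on the aligned dataframe.
residuals = df_all[target_column_name] - df_all['predicted']
print(residuals.describe())
###Output
_____no_output_____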
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
print(response)  # fall back to the raw response; res_dict may be undefined if parsing failed
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#introduction)1. [Setup](#setup)1. [Compute](#compute)1. [Data](#data)1. [Train](#train)1. [Forecast](#forecast)1. [Operationalize](#operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.37.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = "automl-ojforecasting"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["SKU"] = ws.sku
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
pd.set_option("display.max_colwidth", -1)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_D12_V2", max_nodes=6
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = "WeekStarting"
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop("logQuantity", axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ["Store", "Brand"]
nseries = data.groupby(time_series_id_column_names).ngroups
print("Data contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print("Data subset contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time
time_series_id_column_names, group_keys=False
)
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
from azureml.data.dataset_factory import TabularDatasetFactory
datastore = ws.get_default_datastore()
train_dataset = TabularDatasetFactory.register_pandas_dataframe(
train, target=(datastore, "dataset/"), name="dominicks_OJ_train"
)
test_dataset = TabularDatasetFactory.register_pandas_dataframe(
test, target=(datastore, "dataset/"), name="dominicks_OJ_test"
)
###Output
_____no_output_____
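###Markdown
Once registered, the datasets can be retrieved by name from any session attached to this workspace. A small sketch (the `*_by_name` variable names are ours; the dataset names match the ones registered above):
###Code
from azureml.core import Dataset

# Retrieve the latest version of each registered tabular dataset by name.
train_ds_by_name = Dataset.get_by_name(ws, name="dominicks_OJ_train")
test_ds_by_name = Dataset.get_by_name(ws, name="dominicks_OJ_test")
###Output
_____no_output_____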
###Markdown
Create dataset for training
###Code
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = "Quantity"
###Output
_____no_output_____
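###Markdown
To make the pre-processing steps above concrete, the toy sketch below (plain pandas on made-up values, not AutoML's internal code) mimics the default imputations: forward-fill for the target and a median fill for a numeric feature.
###Code
import numpy as np

# Toy illustration of the default imputation strategies (not AutoML internals).
toy = pd.DataFrame({
    "Quantity": [10.0, np.nan, 12.0, np.nan],  # target column: forward-fill
    "Price": [2.5, np.nan, 2.7, 2.6],          # feature column: median fill
})
toy["Quantity"] = toy["Quantity"].ffill()
toy["Price"] = toy["Price"].fillna(toy["Price"].median())
print(toy)
###Output
_____no_output_____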
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputation methods for the target column are constant and ffill (forward fill). The supported imputation methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose("CPWVOL5", "Numeric")
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params(
"Imputer", ["Quantity"], {"strategy": "constant", "fill_value": 0}
)
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params(
"Imputer", ["INCOME"], {"strategy": "median"}
)
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params("Imputer", ["Price"], {"strategy": "ffill"})
###Output
_____no_output_____
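###Markdown
The third customization scenario - dropping leaky columns - is also expressed on a `FeaturizationConfig`. Since this notebook already removed `logQuantity` from the DataFrame, the sketch below demonstrates the call on a separate, throwaway config object (`illustrative_config` is our name):
###Code
# Illustrative only: mark a leaky column to be dropped during featurization.
illustrative_config = FeaturizationConfig()
illustrative_config.drop_columns = ["logQuantity"]
###Output
_____no_output_____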
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq="W-THU", # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(
task="forecasting",
debug_log="automl_oj_sales_errors.log",
primary_metric="normalized_mean_absolute_error",
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters,
)
###Output
_____no_output_____
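###Markdown
The `freq="W-THU"` value above is a standard pandas offset alias (weekly, anchored on Thursdays). A quick sketch of the timestamps such an alias generates:
###Code
# 'W-THU' yields weekly timestamps that fall on Thursdays.
print(pd.date_range("2020-01-01", periods=3, freq="W-THU"))
###Output
_____no_output_____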
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties["model_name"]
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps["timeseriestransformer"]
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
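###Markdown
The summary is returned as a list of per-column records; if it is easier to scan as a table, it can be loaded into a DataFrame. An optional convenience sketch:
###Code
# Optional: view the featurization summary as a DataFrame.
pd.DataFrame.from_records(custom_featurizer.get_featurization_summary())
###Output
_____no_output_____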
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(
test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b")
test_test = plt.scatter(
fcst_df[target_column_name], fcst_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
###Output
_____no_output_____
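###Markdown
In addition to the scores printed above, MAPE is simple enough to compute by hand as a sanity check. A sketch that excludes rows where the actual is zero to avoid division by zero:
###Code
import numpy as np

# Sanity-check MAPE directly from the forecast data frame.
actual = fcst_df[target_column_name].values
pred = fcst_df["predicted"].values
nonzero = ~np.isclose(actual, 0.0)
mape = np.mean(np.abs((actual[nonzero] - pred[nonzero]) / actual[nonzero])) * 100
print("MAPE: {:.2f}%".format(mape))
###Output
_____no_output_____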
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = "AutoML OJ forecaster"
tags = None
model = remote_run.register_model(
model_name=model_name, description=description, tags=tags
)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = "score_fcast.py"
best_run.download_file("outputs/scoring_file_v_1_0_0.py", script_file_name)
###Output
_____no_output_____
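###Markdown
It can be helpful to glance at the generated entry script before deploying. An optional sketch that prints the beginning of the downloaded file:
###Code
# Optional: preview the start of the auto-generated scoring script.
with open(script_file_name) as f:
    print(f.read()[:500])
###Output
_____no_output_____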
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(
environment=best_run.get_environment(), entry_script=script_file_name
)
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=2,
memory_gb=4,
tags={"type": "automl-forecasting"},
description="Automl forecasting sample service",
)
aci_service_name = "automl-oj-forecast-01"
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
sample_quantiles = [0.025, 0.975]
test_sample = json.dumps(
{"data": X_query.to_dict(orient="records"), "quantiles": sample_quantiles}
)
response = aci_service.run(input_data=test_sample)
# translate from networkese to datascientese
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict["index"])
    y_fcst_all[time_column_name] = pd.to_datetime(
        y_fcst_all[time_column_name], unit="ms"
    )
    y_fcst_all["forecast"] = res_dict["forecast"]
    y_fcst_all["prediction_interval"] = res_dict["prediction_interval"]
except Exception:
    # If the response could not be parsed, print it raw for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, "automl-oj-forecast-01")
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputation methods for the target column are constant and ffill (forward fill). The supported imputation methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
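###Markdown
The rolling origin validation mentioned above keeps every validation fold strictly later in time than its training window. The toy sketch below (plain Python, not AutoML's internal splitter) shows the shape of such splits for a single 10-point series with 3 folds and a horizon of 2:
###Code
# Toy rolling-origin splits: each fold trains on an expanding history and
# validates on the periods immediately after it (not AutoML internals).
n_points, n_folds, horizon = 10, 3, 2
for fold in range(n_folds):
    train_end = n_points - (n_folds - fold) * horizon
    print('Fold {}: train=[0..{}] validate=[{}..{}]'.format(
        fold, train_end - 1, train_end, train_end + horizon - 1))
###Output
_____no_output_____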
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
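###Markdown
The forecaster also accepts an explicit target-history argument, as discussed in the note below. A hypothetical sketch, where NaN entries in `y_query` mark the dates whose values are to be forecast (`y_query` and the output names are ours):
###Code
import numpy as np

# Hypothetical sketch of the more general forecast(X, y) interface:
# NaN entries in y mark the periods to be forecast.
y_query = np.full(len(X_test), np.nan)
y_pred_query, X_trans_query = fitted_model.forecast(X_test, y_query)
###Output
_____no_output_____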
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For a deeper discussion, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict['index'])
    y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
    y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response could not be parsed, print it raw for debugging.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the modelThe examples in the following code samples use the [University of Chicago's Dominick's Finer Foods dataset](https://research.chicagobooth.edu/kilts/marketing-databases/dominicks) to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. SetupAs part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojsalesforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojsalesforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None = show full column contents (replaces deprecated -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingFor the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series.
###Code
ntest_periods = 20
def split_last_n_by_grain(df, n):
"""
Group df by grain and split on last n rows for each group
"""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data, ntest_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning):
###Code
nvalidation_periods = 20
X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)
###Output
_____no_output_____
###Markdown
We also need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
y_validate = X_validate.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. |Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error.||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features, shape = [n_training_samples, n_features]||**y**|Target values, shape = [n_training_samples, ]||**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]||**y_valid**|Target values for validation, shape = [n_validation_samples, ]||**enable_ensembling**|Allow AutoML to create ensembles of the best performing models||**debug_log**|Log file path for writing debugging information||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
###Code
automl_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity']
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
X=X_train,
y=y_train,
X_valid=X_validate,
y_valid=y_validate,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**automl_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
local_run
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
Make Predictions from the Best Fitted ModelNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. The target predictions can be retrieved by calling the `predict` method on the best model:
###Code
y_pred = fitted_pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Calculate evaluation metrics for the predictionTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE).
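For reference, with $y_t$ the actuals and $\hat{y}_t$ the predictions over the $n$ retained points, $\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$, which is what the function below computes.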
###Code
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred)))
print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))
print('MAPE: %.2f' % MAPE(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.17.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None = show full column contents (replaces deprecated -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'],
                       target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override the feature type for a specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputation methods for the target column are constant and ffill (forward fill). The supported imputation methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspaceupgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
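###Markdown
To make the imputation settings above concrete, here is a minimal standalone pandas sketch (the toy frame is hypothetical and not part of the AutoML pipeline) showing what constant and forward-fill imputation do:
###Code
import numpy as np
import pandas as pd

# Hypothetical toy frame, just to illustrate the two imputation strategies configured above
toy = pd.DataFrame({'Quantity': [10.0, np.nan, 12.0], 'Price': [1.5, np.nan, np.nan]})
toy['Quantity'] = toy['Quantity'].fillna(0)  # constant imputation with fill_value=0
toy['Price'] = toy['Price'].ffill()          # forward fill: last value carried forward
print(toy)
###Output
_____no_output_____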
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspaceupgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. 
AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
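###Markdown
Beyond the single best model, the metrics logged by each child iteration can also be inspected — a minimal sketch, assuming the run has completed:
###Code
# Illustrative: list the primary metric logged by each AutoML child iteration
for child in remote_run.get_children():
    metrics = child.get_metrics()
    print(child.id, metrics.get('normalized_mean_absolute_error'))
###Output
_____no_output_____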
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test.copy()  # copy so that popping the target below does not mutate the original test frame
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
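###Markdown
The cell above used the short form `forecast(X_test)`. A more general pattern passes an explicit target context alongside the features, with NaN marking the periods to be forecast — a sketch, assuming no past target values are known beyond the training data (`y_query` is a name introduced here for illustration):
###Code
import numpy as np

# y_query carries the target context; NaN marks every period we want forecast
y_query = np.full(len(X_test), np.nan)
y_predictions, X_trans = fitted_model.forecast(X_test, y_query)
###Output
_____no_output_____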
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
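###Markdown
For reference, the MAPE mentioned above can also be computed directly from the merged frame — a quick sketch, assuming the actual quantities contain no zeros:
###Code
import numpy as np

# Mean absolute percentage error; the division assumes no zero actuals
mape = np.mean(np.abs((df_all[target_column_name] - df_all['predicted'])
                      / df_all[target_column_name])) * 100
print('MAPE: {:.3f}%'.format(mape))
###Output
_____no_output_____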
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    print(response)  # res_dict may be undefined if json.loads failed, so show the raw response
y_fcst_all.head()
###Output
_____no_output_____
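###Markdown
The same service can be called from any HTTP client, not just the SDK's `run` helper — a sketch using `requests`, assuming the service endpoint is reachable from this machine:
###Code
import requests

# Equivalent raw HTTP call against the deployed scoring endpoint
headers = {'Content-Type': 'application/json'}
resp = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(resp.status_code)
###Output
_____no_output_____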
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Compute](compute)1. [Data](data)1. [Train](train)1. [Forecast](forecast)1. [Operationalize](operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.35.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # None shows full column width; -1 is no longer accepted by newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the column 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
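###Markdown
Before modeling, it can be useful to confirm the sampling frequency of a series — a sketch, assuming 'tropicana' is one of the Brand values in this dataset:
###Code
# Illustrative: infer the sampling frequency of one (Store, Brand) series.
# Returns a pandas offset alias such as 'W-THU' when the series is regular,
# or None if the series has gaps.
one_series = data_subset[(data_subset.Store == 2) & (data_subset.Brand == 'tropicana')]
print(pd.infer_freq(one_series[time_column_name].sort_values()))
###Output
_____no_output_____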
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
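###Markdown
A quick sanity check on the split — every series should contribute exactly `n_test_periods` rows to the test set:
###Code
# Each (Store, Brand) series should end up with exactly n_test_periods test rows
print(test.groupby(time_series_id_column_names).size().unique())
###Output
_____no_output_____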
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True, show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
test_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_test.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
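###Markdown
The summary is returned as a list of dictionaries; wrapping it in a DataFrame makes it easier to scan (an illustrative convenience, not a required step):
###Code
# Tabular view of the featurization summary
pd.DataFrame(custom_featurizer.get_featurization_summary())
###Output
_____no_output_____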
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv')
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b')
test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
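###Markdown
To confirm the registration, the workspace's model registry can be queried by name — a small sketch:
###Code
from azureml.core.model import Model

# Illustrative: list registered versions of the model we just registered
for m in Model.list(ws, name=model_name):
    print(m.name, m.version)
###Output
_____no_output_____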
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 4,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
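###Markdown
Once the deployment succeeds, the service exposes a REST scoring endpoint; printing it is a quick sanity check:
###Code
# REST endpoint of the deployed web service
print(aci_service.scoring_uri)
###Output
_____no_output_____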
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
sample_quantiles=[0.025,0.975]
test_sample = json.dumps({'data': X_query.to_dict(orient='records'), 'quantiles': sample_quantiles})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all['prediction_interval'] = res_dict['prediction_interval']
except Exception:
    print(response)  # res_dict may be undefined if json.loads failed, so show the raw response
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Compute](compute)1. [Data](data)1. [Train](train)1. [Forecast](forecast)1. [Operationalize](operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.36.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = "automl-ojforecasting"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["SKU"] = ws.sku
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
pd.set_option("display.max_colwidth", -1)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_D12_V2", max_nodes=6
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
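###Markdown
If you want to inspect the provisioning state and node counts of the cluster, the compute target exposes a status object — a sketch:
###Code
# Illustrative: inspect the cluster's current provisioning state and node counts
print(compute_target.get_status().serialize())
###Output
_____no_output_____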
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = "WeekStarting"
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the column 'logQuantity' as it is a leaky feature.
data.drop("logQuantity", axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ["Store", "Brand"]
nseries = data.groupby(time_series_id_column_names).ngroups
print("Data contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print("Data subset contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time
time_series_id_column_names, group_keys=False
)
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r"./dominicks_OJ_train.csv", index=None, header=True)
test.to_csv(r"./dominicks_OJ_test.csv", index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(
files=["./dominicks_OJ_train.csv", "./dominicks_OJ_test.csv"],
target_path="dataset/",
overwrite=True,
show_progress=True,
)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(
path=datastore.path("dataset/dominicks_OJ_train.csv")
)
test_dataset = Dataset.Tabular.from_delimited_files(
path=datastore.path("dataset/dominicks_OJ_test.csv")
)
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
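###Markdown
Optionally, the datasets can be registered in the workspace so they can later be retrieved by name — a sketch, assuming the names below are free to use:
###Code
# Optional: register the datasets for reuse; the names here are illustrative
train_dataset = train_dataset.register(
    workspace=ws, name="dominicks_OJ_train", create_new_version=True
)
test_dataset = test_dataset.register(
    workspace=ws, name="dominicks_OJ_test", create_new_version=True
)
###Output
_____no_output_____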
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = "Quantity"
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose("CPWVOL5", "Numeric")
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params(
"Imputer", ["Quantity"], {"strategy": "constant", "fill_value": 0}
)
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params(
"Imputer", ["INCOME"], {"strategy": "median"}
)
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params("Imputer", ["Price"], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq="W-THU", # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(
task="forecasting",
debug_log="automl_oj_sales_errors.log",
primary_metric="normalized_mean_absolute_error",
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters,
)
###Output
_____no_output_____
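###Markdown
Since the `freq="W-THU"` alias passed above may be unfamiliar: it is the standard pandas offset alias for a weekly frequency anchored on Thursdays, matching the weekly Thursday timestamps in the OJ data. A quick, self-contained illustration:
###Code
# Weekly dates anchored on Thursday; since the start date is a Thursday it is included.
import pandas as pd

print(pd.date_range("1990-06-14", periods=3, freq="W-THU"))
# -> 1990-06-14, 1990-06-21, 1990-06-28 (all Thursdays)
###Output
_____no_output_____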
###Markdown
You can now submit a new training run. Depending on the data and number of iterations, this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True`, and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
## Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties["model_name"]
###Output
_____no_output_____
###Markdown
## Transparency
View updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps["timeseriestransformer"]
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
## Forecast
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.

The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
## Retrieving forecasts from the model
We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute.

To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics, which are approximately constant for each store over the 20 week forecast horizon in the testing data.
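The helper script itself is not shown in this notebook. As a rough, hypothetical sketch (the names and details below are ours; the shipped `forecasting_script` may differ), its core would load the pickled pipeline, call `forecast` on the test data, and write the predictions to the run's outputs folder:
###Code
# Hypothetical sketch of a remote scoring entry script (illustration only).
import joblib
import pandas as pd
from azureml.core import Run

run = Run.get_context()  # handle to the remote run this script executes in
fitted_model = joblib.load("model.pkl")  # assumes the pickled model was staged next to the script

X_test = pd.read_csv("test_data.csv", parse_dates=["WeekStarting"])  # hypothetical input file
y_test = X_test.pop("Quantity").values

y_predictions, X_trans = fitted_model.forecast(X_test)

X_test["predicted"] = y_predictions
X_test["Quantity"] = y_test
X_test.to_csv("outputs/predictions.csv", index=False)  # files under ./outputs stay with the run
###Output
_____no_output_____
###Markdown
We now submit the actual inference run via the helper: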
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(
test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
###Output
_____no_output_____
###Markdown
## Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).

We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
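For reference, MAPE over $n$ test points with actuals $y_t$ and forecasts $\hat{y}_t$ is $\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$, i.e. the average absolute error expressed as a percentage of the actual value (undefined whenever some $y_t = 0$).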
###Code
# load forecast data frame
fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b")
test_test = plt.scatter(
fcst_df[target_column_name], fcst_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
###Output
_____no_output_____
###Markdown
## Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = "AutoML OJ forecaster"
tags = None
model = remote_run.register_model(
model_name=model_name, description=description, tags=tags
)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
## Develop the scoring script
For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = "score_fcast.py"
best_run.download_file("outputs/scoring_file_v_1_0_0.py", script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(
environment=best_run.get_environment(), entry_script=script_file_name
)
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=2,
memory_gb=4,
tags={"type": "automl-forecasting"},
description="Automl forecasting sample service",
)
aci_service_name = "automl-oj-forecast-01"
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
sample_quantiles = [0.025, 0.975]
test_sample = json.dumps(
{"data": X_query.to_dict(orient="records"), "quantiles": sample_quantiles}
)
response = aci_service.run(input_data=test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict["index"])
y_fcst_all[time_column_name] = pd.to_datetime(
y_fcst_all[time_column_name], unit="ms"
)
y_fcst_all["forecast"] = res_dict["forecast"]
y_fcst_all["prediction_interval"] = res_dict["prediction_interval"]
except Exception:
    # If the response is not valid JSON (e.g. an error message), print it as-is;
    # the previous bare "print(res_dict)" raised NameError when json.loads failed.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, "automl-oj-forecast-01")
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)

# Automated Machine Learning
_**Orange Juice Sales Forecasting**_

## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)

## Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.

Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.

The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.

## Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.24.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
## Compute
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
## Data
You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
## Data Splitting
We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
## Upload data to datastore
The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
## Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span.
* Impute missing values in the target (via forward-fill) and feature columns (using median column values).
* Create features based on time series identifiers to enable fixed effects across different series.
* Create time-based features to assist in learning seasonal patterns.
* Encode categorical variables to numeric quantities.

In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.

You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
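###Markdown
To make the imputation step described above concrete, here is a small pandas sketch of the same idea (our illustration of the documented behavior, not AutoML's actual code): the target column is forward-filled while a numeric feature column receives its median.
###Code
# Illustration only: forward-fill a target column, median-impute a feature column.
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "Quantity": [10.0, np.nan, np.nan, 14.0],  # target: forward-fill
    "Price": [2.5, np.nan, 2.7, 2.6],          # feature: median imputation
})
toy["Quantity"] = toy["Quantity"].ffill()
toy["Price"] = toy["Price"].fillna(toy["Price"].median())
print(toy)
###Output
_____no_output_____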
###Markdown
## Customization
The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:

1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter changes for Imputer only. Users can customize imputation methods. The supported imputation methods for the target column are constant and ffill (forward fill). The supported imputation methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, where the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns that contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
## Forecasting Parameters
To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.

|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|

## Train
The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.

For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.

The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.

We note here that AutoML can sweep over two types of time-series models:
* Models that are trained for each series such as ARIMA and Facebook's Prophet.
* Models trained across multiple time-series using a regression approach.

In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell.

Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.

Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether the featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations, this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True`, and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
## Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
## Transparency
View updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
## Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)`, as `predict(X)` is reserved for internal purposes on forecasting models. For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb).

## Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).

We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
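As a hedged sketch of that more general interface (see the forecast function notebook for the authoritative treatment), known recent target values can be passed alongside the features, with `np.nan` marking the rows to be predicted; `y_query` below is our illustrative name:
###Code
# Illustration of the forecast(X, y) calling convention.
import numpy as np

y_query = np.full(len(X_test), np.nan)  # NaN = "please forecast this row"
y_fcst, X_trans_fcst = fitted_model.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
We now assemble predictions and actuals into a single dataframe and score them: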
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
## Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
## Develop the scoring script
For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response is not valid JSON (e.g. an error message), print it as-is;
    # the previous bare "print(res_dict)" raised NameError when json.loads failed.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)

# Automated Machine Learning
_**Orange Juice Sales Forecasting**_

## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Compute](#Compute)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)

## Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.

Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.

The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.

## Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.10.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
## Compute
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.

As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
## Data
You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
## Data Splitting
We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
## Upload data to datastore
The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
## Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span.
* Impute missing values in the target (via forward-fill) and feature columns (using median column values).
* Create grain-based features to enable fixed effects across different series.
* Create time-based features to assist in learning seasonal patterns.
* Encode categorical variables to numeric quantities.

In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook.

You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
## Customization
The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:

1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.
2. Transformer parameters update: Currently supports parameter changes for Imputer only. Users can customize imputation methods; the supported methods are constant for target data and mean, median, most frequent and constant for training data. This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, where the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.
3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns that contain no useful data.

This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
## Train
The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.

For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.

The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.

We note here that AutoML can sweep over two types of time-series models:
* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).
* Models trained across multiple time-series using a regression approach.

In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell.

Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.

Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether the featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations, this operation may take several minutes. Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
## Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
## Transparency
View updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
## Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)`, as `predict(X)` is reserved for internal purposes on forecasting models. For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb).

## Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
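As a rough sketch of what such alignment involves (our illustration; the `align_outputs` helper used below is more thorough and also handles rows that were dropped or re-ordered), predictions can be joined back onto the original rows via the index carried by the transformed features, assuming `X_trans` preserves `X_test`'s row index:
###Code
# Illustration only: naively re-align predictions with the inputs by index.
import pandas as pd

def naive_align(y_pred, X_trans, X_test, y_true, target_col):
    fcst = pd.DataFrame({"predicted": y_pred}, index=X_trans.index)
    actual = X_test.copy()
    actual[target_col] = y_true
    return actual.join(fcst, how="inner")  # keep only rows that received a forecast
###Output
_____no_output_____
###Markdown
The sample's own helper performs this alignment robustly: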
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
## Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
## Develop the scoring script
For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If the response is not valid JSON (e.g. an error message), print it as-is;
    # the previous bare "print(res_dict)" raised NameError when json.loads failed.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)

# Automated Machine Learning
_**Orange Juice Sales Forecasting**_

## Contents
1. [Introduction](#introduction)
1. [Setup](#setup)
1. [Compute](#compute)
1. [Data](#data)
1. [Train](#train)
1. [Forecast](#forecast)
1. [Operationalize](#operationalize)

## Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.

Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.

The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.

## Setup
###Code
import json
import logging
import azureml.core
import pandas as pd
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.38.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = "automl-ojforecasting"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["SKU"] = ws.sku
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
pd.set_option("display.max_colwidth", -1)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_D12_V2", max_nodes=6
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = "WeekStarting"
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop("logQuantity", axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ["Store", "Brand"]
nseries = data.groupby(time_series_id_column_names).ngroups
print("Data contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print("Data subset contains {0} individual time-series.".format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time
time_series_id_column_names, group_keys=False
)
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
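###Markdown
As a quick, optional sanity check that the split above really is stratified by series, every series should contribute exactly `n_test_periods` rows to the test set:
###Code
# Each (Store, Brand) series should contribute exactly n_test_periods rows to the test set
test_counts = test.groupby(time_series_id_column_names).size()
assert (test_counts == n_test_periods).all()
print("All {} series have {} test rows each.".format(len(test_counts), n_test_periods))
###Output
_____no_output_____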
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
from azureml.data.dataset_factory import TabularDatasetFactory
datastore = ws.get_default_datastore()
train_dataset = TabularDatasetFactory.register_pandas_dataframe(
train, target=(datastore, "dataset/"), name="dominicks_OJ_train"
)
test_dataset = TabularDatasetFactory.register_pandas_dataframe(
test, target=(datastore, "dataset/"), name="dominicks_OJ_test"
)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = "Quantity"
###Output
_____no_output_____
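###Markdown
To make the forward-fill target imputation described in the Modeling section above concrete, here is a minimal, standalone pandas sketch on a made-up three-row series. This is an illustration only, not AutoML's internal code; the toy dates and values are invented:
###Code
import pandas as pd
import numpy as np

# Toy weekly series with one absent week (1990-06-28) and one missing target value
toy = pd.DataFrame(
    {"WeekStarting": pd.to_datetime(["1990-06-14", "1990-06-21", "1990-07-05"]),
     "Quantity": [10500.0, np.nan, 9800.0]}
).set_index("WeekStarting")

# Re-index to a regular weekly (Thursday) frequency, then forward-fill the target
regular = toy.asfreq("W-THU")
regular["Quantity"] = regular["Quantity"].ffill()
print(regular)
###Output
_____no_output_____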
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose("CPWVOL5", "Numeric")
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params(
"Imputer", ["Quantity"], {"strategy": "constant", "fill_value": 0}
)
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params(
"Imputer", ["INCOME"], {"strategy": "median"}
)
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params("Imputer", ["Price"], {"strategy": "ffill"})
###Output
_____no_output_____
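###Markdown
The cell above already shows scenario 1 forcing a column to Numeric; the same call shape can coerce an epoch-style integer column to DateTime. The snippet below is purely hypothetical: `OrderEpoch` is not a column in this dataset and is used only to illustrate the call:
###Code
# Hypothetical illustration of scenario 1; 'OrderEpoch' does NOT exist in dominicks_OJ.csv.
demo_featurization = FeaturizationConfig()
demo_featurization.add_column_purpose("OrderEpoch", "DateTime")
###Output
_____no_output_____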
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|This optional parameter represents the column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined or incorrectly defined, time series identifiers will be created automatically if they exist.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given or incorrectly given, AutoML automatically creates time_series_id columns if they exist. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. 
This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
freq="W-THU", # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(
task="forecasting",
debug_log="automl_oj_sales_errors.log",
primary_metric="normalized_mean_absolute_error",
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters,
)
###Output
_____no_output_____
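###Markdown
The `freq='W-THU'` value passed above is a standard pandas offset alias. A quick standalone check of the dates it generates (pure pandas, independent of AutoML):
###Code
import pandas as pd

# 'W-THU' means weekly periods anchored on Thursdays
print(pd.date_range(start="1990-06-14", periods=4, freq="W-THU"))
###Output
_____no_output_____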
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Run detailsBelow we retrieve the best Run object from among all the runs in the experiment.
###Code
best_run = remote_run.get_best_child()
model_name = best_run.properties["model_name"]
best_run
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
# Download the featurization summary JSON file locally
best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json")
# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
records = json.load(f)
fs = pd.DataFrame.from_records(records)
# View a summary of the featurization
fs[["RawFeatureName", "TypeDetected", "Dropped", "EngineeredFeatureCount", "Transformations"]]
###Output
_____no_output_____
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(
test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b")
test_test = plt.scatter(
fcst_df[target_column_name], fcst_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
###Output
_____no_output_____
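###Markdown
If the printed scores include `mean_absolute_percentage_error`, that figure can be reproduced by hand from the forecast frame. The cell below is a standalone re-computation for clarity, not part of the AutoML scoring module; near-zero actuals are dropped to avoid division by zero:
###Code
import numpy as np

actual = fcst_df[target_column_name].values
pred = fcst_df["predicted"].values

# MAPE = mean(100 * |actual - pred| / |actual|) over rows with non-zero actuals
nonzero = ~np.isclose(actual, 0.0)
mape = np.mean(100 * np.abs((actual[nonzero] - pred[nonzero]) / actual[nonzero]))
print("MAPE: {:.3f}".format(mape))
###Output
_____no_output_____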
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = "AutoML OJ forecaster"
tags = None
model = remote_run.register_model(
model_name=model_name, description=description, tags=tags
)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = "score_fcast.py"
best_run.download_file("outputs/scoring_file_v_1_0_0.py", script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(
environment=best_run.get_environment(), entry_script=script_file_name
)
aciconfig = AciWebservice.deploy_configuration(
cpu_cores=2,
memory_gb=4,
tags={"type": "automl-forecasting"},
description="Automl forecasting sample service",
)
aci_service_name = "automl-oj-forecast-01"
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts the complex dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
sample_quantiles = [0.025, 0.975]
test_sample = json.dumps(
{"data": X_query.to_dict(orient="records"), "quantiles": sample_quantiles}
)
response = aci_service.run(input_data=test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict["index"])
y_fcst_all[time_column_name] = pd.to_datetime(
y_fcst_all[time_column_name], unit="ms"
)
y_fcst_all["forecast"] = res_dict["forecast"]
y_fcst_all["prediction_interval"] = res_dict["prediction_interval"]
except Exception:
    print(response)  # json.loads may have failed, so show the raw response
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, "automl-oj-forecast-01")
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to find and tune a time-series forecasting model.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.In this notebook, you will:1. Create an Experiment in an existing Workspace2. Instantiate an AutoMLConfig 3. Find and train a forecasting model using local compute4. Evaluate the performance of the modelThe examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesAutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]||**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_ensembling**|Allow AutoML to create ensembles of the best performing models|**debug_log**|Log file path for writing debugging information|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods # optional
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=5,
enable_ensembling=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
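###Markdown
To make the rolling origin validation mentioned above concrete, here is a conceptual sketch of how time-ordered folds could be generated for a single series. The sizes are made up, and this is not AutoML's internal fold construction:
###Code
def rolling_origin_folds(n_samples, n_folds, horizon):
    """Yield (train, validation) index ranges; each validation window starts
    right after its training data and moves forward in time."""
    for k in range(n_folds):
        train_end = n_samples - (n_folds - k) * horizon
        yield range(train_end), range(train_end, train_end + horizon)

# Illustrative numbers: 100 observations, 3 folds, horizon of 20
for train_idx, valid_idx in rolling_origin_folds(100, 3, 20):
    print("train 0..{}  validate {}..{}".format(
        len(train_idx) - 1, valid_idx.start, valid_idx.stop - 1))
###Output
_____no_output_____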
###Markdown
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
local_run = experiment.submit(automl_config, show_output=True)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
###Output
_____no_output_____
###Markdown
PredictNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` serves as a placeholder to be replaced by a forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the train data which contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
###Code
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)  # builtin float; np.float was removed in NumPy 1.24
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib notebook
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptSerializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
###Code
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
###Output
_____no_output_____
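###Markdown
The timestamp quirk handled above is easy to reproduce in isolation: `DataFrame.to_json` serializes datetimes as epoch milliseconds by default, which is why the scoring function converts them back with `unit='ms'`. A standalone pandas check:
###Code
import pandas as pd

stamp = pd.DataFrame({'WeekStarting': pd.to_datetime(['1990-06-14'])})
as_json = stamp.to_json()  # the timestamp becomes epoch milliseconds
print(as_json)
print(pd.read_json(as_json, convert_dates=False))  # reads back as a plain integer
###Output
_____no_output_____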
###Markdown
Now that the function works locally in the notebook, let's write it down into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs and processing as needed.
###Code
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
    {'X': X_test.to_json(), 'y' : y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(best_run.id.split('_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
###Output
_____no_output_____
###Markdown
Create a Container Image
###Code
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
###Output
_____no_output_____
###Markdown
Deploy the Image as a Web Service on Azure Container Instance
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
###Output
_____no_output_____
###Markdown
Call the service
###Code
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    print(response)  # json.loads may have failed, so show the raw response
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas; None means no truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-oj"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv('./dominicks_OJ_train.csv', index=False, header=True)
test.to_csv('./dominicks_OJ_test.csv', index=False, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
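###Markdown
Because tabular dataset operations are lazily evaluated, a row subset can be chained before materializing; nothing is read from the datastore until `to_pandas_dataframe()` runs. A small illustration:
###Code
# Lazily select the first 5 rows; data is only pulled when converted to pandas
train_dataset.take(5).to_pandas_dataframe()
###Output
_____no_output_____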
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
TrainThe AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**time_column_name**|Name of the datetime column in the input data||**grain_column_names**|Name(s) of the columns defining individual series in the input data||**drop_column_names**|Name(s) of columns to drop prior to modeling||**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'], # 'logQuantity' is a leaky feature, so we remove it.
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
**time_series_settings)
###Output
_____no_output_____
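###Markdown
To build intuition for the rolling origin validation described above, the sketch below constructs analogous folds by hand for a single series. This is purely illustrative (the function and its parameters are made up); AutoML creates its folds internally.
###Code
# Illustrative only: each fold trains on a growing prefix of the series and
# validates on the next `horizon` points, rolling the origin backwards per fold.
def rolling_origin_folds(n_samples, n_folds, horizon):
    folds = []
    for k in range(n_folds):
        valid_end = n_samples - k * horizon
        valid_start = valid_end - horizon
        folds.append((list(range(valid_start)), list(range(valid_start, valid_end))))
    return folds

for train_idx, valid_idx in rolling_origin_folds(n_samples=100, n_folds=3, horizon=20):
    print('train: 0..{}, validate: {}..{}'.format(train_idx[-1], valid_idx[0], valid_idx[-1]))
###Output
_____no_output_____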
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants
# use automl metrics module
scores = metrics.compute_metrics_regression(
df_all['predicted'],
df_all[target_column_name],
list(constants.Metric.SCALAR_REGRESSION_SET),
None, None, None)
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
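###Markdown
The scores above come from AutoML's scalar regression set. Since MAPE was called out explicitly, here is a direct calculation on the aligned frame, assuming no zero-valued actuals in this slice of the data:
###Code
import numpy as np

# Mean absolute percentage error over aligned actuals and predictions
mape = np.mean(np.abs((df_all[target_column_name] - df_all['predicted'])
                      / df_all[target_column_name])) * 100
print('MAPE: {:.2f}%'.format(mape))
###Output
_____no_output_____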
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If json.loads fails, res_dict is undefined, so show the raw response instead
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.19.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None disables truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
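###Markdown
As an optional sanity check (not part of the original workflow), we can confirm that the series are of comparable length by counting observations per (Store, Brand) pair:
###Code
# Number of weekly observations in each individual time-series
series_lengths = data.groupby(time_series_id_column_names).size()
print(series_lengths.describe())
###Output
_____no_output_____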
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
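###Markdown
An optional check that the split behaves as intended: for every series, the latest training date should fall strictly before the earliest test date.
###Code
# Compare the last training date with the first test date, per series
last_train = train.groupby(time_series_id_column_names)[time_column_name].max()
first_test = test.groupby(time_series_id_column_names)[time_column_name].min()
assert (last_train < first_test).all(), 'train/test dates overlap for some series'
print('All {} series split cleanly.'.format(len(last_train)))
###Output
_____no_output_____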
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario where the column type does not correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used in the scenario where our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
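###Markdown
To make the imputation settings above concrete, the toy cell below mimics their intended effect with plain pandas (an illustration only, with made-up values; it is not how AutoML applies the transformers internally):
###Code
toy = pd.DataFrame({'Quantity': [5.0, None, 7.0],
                    'INCOME': [10.0, None, 14.0],
                    'Price': [1.5, None, 1.7]})
toy['Quantity'] = toy['Quantity'].fillna(0)                   # constant, fill_value=0
toy['INCOME'] = toy['INCOME'].fillna(toy['INCOME'].median())  # median
toy['Price'] = toy['Price'].ffill()                           # forward fill
print(toy)
###Output
_____no_output_____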
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. 
Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
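###Markdown
If the summary is returned as a list of records, as in recent SDK versions (an assumption worth verifying on your version), it is easier to scan as a DataFrame:
###Code
# Assumes get_featurization_summary() returns a list of per-column records
summary = custom_featurizer.get_featurization_summary()
pd.DataFrame.from_records(summary)
###Output
_____no_output_____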
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) covers this interface in more detail. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
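###Markdown
Aggregate metrics can mask weak individual series. Because df_all retains the Store and Brand columns, a per-series breakdown is straightforward; this optional check again assumes no zero-valued actuals:
###Code
import numpy as np

def per_series_mape(group):
    return np.mean(np.abs((group[target_column_name] - group['predicted'])
                          / group[target_column_name])) * 100

print(df_all.groupby(time_series_id_column_names).apply(per_series_mape).sort_values(ascending=False))
###Output
_____no_output_____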
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If json.loads fails, res_dict is undefined, so show the raw response instead
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
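###Markdown
As a rough sanity check on the deployed service, we can compare its forecasts with the held-out actuals. Note the assumption here: the service is presumed to preserve the row order of X_query, which may not hold in general.
###Code
import numpy as np

# MAPE of the service's forecasts against the held-out actuals (order-sensitive)
service_mape = np.mean(np.abs((y_test - np.asarray(y_fcst_all['forecast'])) / y_test)) * 100
print('Service forecast MAPE: {:.2f}%'.format(service_mape))
###Output
_____no_output_____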
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.18.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None disables truncation
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path='dataset/', overwrite=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario where the column type does not correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter change for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used in the scenario where our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or which contain no useful data.
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. 
Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.The [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) covers this interface in more detail. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # If json.loads fails, res_dict is undefined, so show the raw response instead
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Compute](Compute)1. [Data](Data)1. [Train](Train)1. [Predict](Predict)1. [Operationalize](Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.11.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
###Code
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
###Code
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_grain(data_subset, n_test_periods)
###Output
_____no_output_____
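###Markdown
As a quick sanity check, every (Store, Brand) series should contribute exactly `n_test_periods` rows to the test set. The small sketch below verifies this; it is not part of the original workflow, just a confirmation of the split.
###Code
# Sanity check: each series should have exactly n_test_periods rows in the test set.
test_counts = test.groupby(grain_column_names).size()
assert (test_counts == n_test_periods).all()
test_counts.head()
###Output
_____no_output_____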
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create grain-based features to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please check out the forecasting grouping notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
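###Markdown
To build some intuition for the pre-processing steps listed above, here is a minimal pandas sketch of two of them: forward-fill imputation of the target within each series, and simple calendar features. This is purely illustrative and is not AutoML's internal featurizer.
###Code
# Illustrative only: a hand-rolled version of two featurization steps that
# AutoML performs internally (this is NOT the AutoML implementation).
example = train.copy()
# Forward-fill missing target values within each (Store, Brand) series.
example[target_column_name] = example.groupby(grain_column_names)[target_column_name].ffill()
# Simple time-based features to help a regression model learn seasonal patterns.
example['month'] = example[time_column_name].dt.month
example['year'] = example[time_column_name].dt.year
example[[time_column_name, 'month', 'year']].head()
###Output
_____no_output_____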
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods; the supported methods are constant for target data, and mean, median, most frequent and constant for training data. This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, where the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or columns that contain no useful data.This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
###Code
featurization_config = FeaturizationConfig()
featurization_config.drop_columns = ['logQuantity'] # 'logQuantity' is a leaky feature, so we remove it.
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
###Output
_____no_output_____
###Markdown
TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If grain columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
**time_series_settings)
###Output
_____no_output_____
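###Markdown
The rolling origin validation mentioned above can be pictured with a small sketch: each fold trains on an initial span of the series and validates on the periods immediately after it, with the origin rolling forward from fold to fold. This is a conceptual illustration only, not AutoML's internal fold logic.
###Code
# Conceptual sketch of rolling-origin CV folds (not AutoML's internal logic).
def rolling_origin_folds(n_samples, n_folds, horizon):
    """Return (train_indices, validation_indices) pairs; the origin rolls forward."""
    folds = []
    for k in range(n_folds, 0, -1):
        train_end = n_samples - k * horizon
        folds.append((range(0, train_end), range(train_end, train_end + horizon)))
    return folds

for train_idx, valid_idx in rolling_origin_folds(n_samples=100, n_folds=3, horizon=20):
    print('train: 0-{}, validate: {}-{}'.format(train_idx[-1], valid_idx[0], valid_idx[-1]))
###Output
_____no_output_____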
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. See the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for more detail on the forecast interface. EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
###Code
from forecasting_helper import align_outputs
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
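###Markdown
As a cross-check on the scoring module, the MAPE mentioned above can also be computed by hand from the aligned dataframe. A small sketch follows; weeks with zero actual sales are excluded to avoid division by zero.
###Code
# Hand-computed MAPE as a sanity check on the automl scoring module.
actuals = df_all[target_column_name].values
preds = df_all['predicted'].values
nonzero = actuals != 0  # avoid division by zero for zero-sales weeks
mape = np.mean(np.abs((actuals[nonzero] - preds[nonzero]) / actuals[nonzero])) * 100
print('MAPE: {:.2f}%'.format(mape))
###Output
_____no_output_____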
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
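###Markdown
The downloaded script is auto-generated by AutoML, but every AzureML entry script follows the same two-function contract: `init()` loads the model once per container, and `run()` handles each scoring request. The sketch below only illustrates that contract, not the generated file; the model name is a placeholder.
###Code
# Minimal sketch of the init()/run() contract used by AzureML entry scripts.
# The generated scoring_file_v_1_0_0.py is more elaborate; this is illustrative only.
scoring_sketch = '''
import json
import joblib
import pandas as pd
from azureml.core.model import Model

def init():
    global model
    # Resolve the registered model's path inside the service container.
    model_path = Model.get_model_path(model_name='MODEL_NAME')  # placeholder name
    model = joblib.load(model_path)

def run(raw_data):
    data = pd.DataFrame(json.loads(raw_data)['data'])
    y_pred, _ = model.forecast(data)
    return json.dumps({'forecast': y_pred.tolist()})
'''
print(scoring_sketch)
###Output
_____no_output_____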
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # res_dict is undefined here if json.loads itself failed, so print the raw response.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#introduction)1. [Setup](#setup)1. [Compute](#compute)1. [Data](#data)1. [Train](#train)1. [Forecast](#forecast)1. [Operationalize](#operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.33.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
test_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_test.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, where the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or columns that contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.

|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|

TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**experiment_timeout_hours**|Experimentation timeout in hours.|
|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**featurization**|'auto' / 'off' / FeaturizationConfig. Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used.|
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
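###Markdown
Because the frequency was pinned to 'W-THU', the forecast window implied by `forecast_horizon=20` can be written down explicitly: it is the 20 Thursday-anchored weekly periods that follow the last training date. A quick sketch:
###Code
# The forecast window implied by forecast_horizon=20 and freq='W-THU'.
last_train_date = train[time_column_name].max()
forecast_dates = pd.date_range(start=last_train_date, periods=n_test_periods + 1, freq='W-THU')[1:]
print('Forecast window: {} to {}'.format(forecast_dates[0].date(), forecast_dates[-1].date()))
###Output
_____no_output_____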
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test_dataset,
target_column_name=target_column_name)
remote_run_infer.wait_for_completion(show_output=False)
# download the forecast file to the local machine
remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv')
###Output
_____no_output_____
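###Markdown
For reference, `run_remote_inference` wraps roughly the following submission pattern. This is a sketch under the assumption that the helper builds a `ScriptRunConfig`; it is not the actual implementation, and the script name shown is hypothetical.
###Code
# Sketch of the remote-scoring submission pattern (not the actual helper code).
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory='.',
                      script='forecasting_script.py',  # assumed helper script name
                      compute_target=compute_target)
# The real helper also passes the trained run, dataset, and target column as arguments:
# remote_run_infer = test_experiment.submit(src)
###Output
_____no_output_____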
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
# load forecast data frame
fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b')
test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
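###Markdown
The residuals article linked above comes down to inspecting `actual - predicted`; a quick histogram is a useful first look. A roughly zero-centered distribution suggests the forecaster is not systematically over- or under-predicting. This is a small sketch, not part of the original evaluation.
###Code
# First-pass residual check on the test-set forecasts.
residuals = fcst_df[target_column_name] - fcst_df['predicted']
plt.hist(residuals, bins=30)
plt.xlabel('residual (actual - predicted)')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____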
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker image running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2,
memory_gb = 4,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = test.copy()
X_query.pop(target_column_name)
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The service object accepts a dictionary, which is internally converted to a JSON string.
# The section 'data' contains the data frame in the form of a dictionary.
test_sample = json.dumps({"data": json.loads(X_query.to_json(orient="records"))})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    # res_dict is undefined here if json.loads itself failed, so print the raw response.
    print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) Automated Machine Learning_**Orange Juice Sales Forecasting**_ Contents1. [Introduction](#Introduction)1. [Setup](#Setup)1. [Compute](#Compute)1. [Data](#Data)1. [Train](#Train)1. [Predict](#Predict)1. [Operationalize](#Operationalize) IntroductionIn this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. Setup
###Code
import azureml.core
import pandas as pd
import numpy as np
import logging
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
###Output
_____no_output_____
###Markdown
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
###Code
print("This notebook was created using version 1.29.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
_____no_output_____
###Markdown
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
###Code
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
ComputeYou will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "oj-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
DataYou are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
###Code
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
# Drop the columns 'logQuantity' as it is a leaky feature.
data.drop('logQuantity', axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series:
###Code
time_series_id_column_names = ['Store', 'Brand']
nseries = data.groupby(time_series_id_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
For demonstration purposes, we extract sales time-series for just a few of the stores:
###Code
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(time_series_id_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
###Output
_____no_output_____
###Markdown
Data SplittingWe now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns.
###Code
n_test_periods = 20
def split_last_n_by_series_id(df, n):
"""Group df by series identifiers and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(time_series_id_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
train, test = split_last_n_by_series_id(data_subset, n_test_periods)
###Output
_____no_output_____
###Markdown
Upload data to datastoreThe [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation.
###Code
train.to_csv (r'./dominicks_OJ_train.csv', index = None, header=True)
test.to_csv (r'./dominicks_OJ_test.csv', index = None, header=True)
datastore = ws.get_default_datastore()
datastore.upload_files(files = ['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'], target_path = 'dataset/', overwrite = True,show_progress = True)
###Output
_____no_output_____
###Markdown
Create dataset for training
###Code
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()
###Output
_____no_output_____
###Markdown
ModelingFor forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series* Create time-based features to assist in learning seasonal patterns* Encode categorical variables to numeric quantitiesIn this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
###Code
target_column_name = 'Quantity'
###Output
_____no_output_____
###Markdown
CustomizationThe featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0, where the correct behavior is to impute all the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.3. Drop columns: Columns to drop from being featurized. These are usually columns which are leaky or columns that contain no useful data.
###Code
featurization_config = FeaturizationConfig()
# Force the CPWVOL5 feature to be numeric type.
featurization_config.add_column_purpose('CPWVOL5', 'Numeric')
# Fill missing values in the target column, Quantity, with zeros.
featurization_config.add_transformer_params('Imputer', ['Quantity'], {"strategy": "constant", "fill_value": 0})
# Fill missing values in the INCOME column with median value.
featurization_config.add_transformer_params('Imputer', ['INCOME'], {"strategy": "median"})
# Fill missing values in the Price column with forward fill (last value carried forward).
featurization_config.add_transformer_params('Imputer', ['Price'], {"strategy": "ffill"})
###Output
_____no_output_____
###Markdown
Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainThe [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.We note here that AutoML can sweep over two types of time-series models:* Models that are trained for each series such as ARIMA and Facebook's Prophet.* Models trained across multiple time-series using a regression approach.In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. 
One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.Here is a summary of AutoMLConfig parameters used for training the OJ model:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**experiment_timeout_hours**|Experimentation timeout in hours.||**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection||**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models||**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models||**debug_log**|Log file path for writing debugging information||**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.||**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=n_test_periods,
time_series_id_column_names=time_series_id_column_names,
freq='W-THU' # Set the forecast frequency to be weekly (start on each Thursday)
)
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
experiment_timeout_hours=0.25,
training_data=train_dataset,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
featurization=featurization_config,
n_cross_validations=3,
verbosity=logging.INFO,
max_cores_per_iteration=-1,
forecasting_parameters=forecasting_parameters)
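# Illustrative sketch only (assumption: this is NOT AutoML's internal code, and the
# function name is ours): rolling-origin CV grows the training window fold by fold
# and validates on the periods immediately following it, so no fold trains on data
# from its own validation future.
def rolling_origin_splits(n_samples, n_folds, horizon):
    # yields (train_indices, validation_indices) per fold
    for k in range(n_folds, 0, -1):
        train_end = n_samples - k * horizon
        yield range(train_end), range(train_end, train_end + horizon)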
###Output
_____no_output_____
###Markdown
You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelEach run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
###Code
best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']
###Output
_____no_output_____
###Markdown
TransparencyView updated featurization summary
###Code
custom_featurizer = fitted_model.named_steps['timeseriestransformer']
custom_featurizer.get_featurization_summary()
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()
###Output
_____no_output_____
###Markdown
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
###Code
# forecast returns the predictions and the featurized data, aligned to X_test.
# This contains the assumptions that were made in the forecast
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models. For more details, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for several metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics.
###Code
assign_dict = {'predicted': y_predictions, target_column_name: y_test}
df_all = X_test.assign(**assign_dict)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl scoring module
scores = scoring.score_regression(
y_test=df_all[target_column_name],
y_pred=df_all['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
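# For reference, MAPE can also be computed directly from the frame — a sketch that
# skips rows where the actual is zero to avoid division by zero:
nonzero = df_all[target_column_name] != 0
mape = ((df_all.loc[nonzero, target_column_name] - df_all.loc[nonzero, 'predicted']).abs()
        / df_all.loc[nonzero, target_column_name].abs()).mean() * 100
print('MAPE (manual): {:.3f}%'.format(mape))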
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Operationalize _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model.
###Code
description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name = model_name, description = description, tags = tags)
print(remote_run.model_id)
###Output
_____no_output_____
###Markdown
Develop the scoring scriptFor the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run.
###Code
script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)
###Output
_____no_output_____
###Markdown
Deploy the model as a Web Service on Azure Container Instance
###Code
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(environment = best_run.get_environment(),
entry_script = script_file_name)
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()
###Output
_____no_output_____
###Markdown
Call the service
###Code
import json
X_query = X_test.copy()
# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)
# The Service object accept the complex dictionary, which is internally converted to JSON string.
# The section 'data' contains the data frame in the form of dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.DataFrame(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
# if parsing failed, res_dict may not exist, so show the raw response instead
print(response)
y_fcst_all.head()
###Output
_____no_output_____
###Markdown
Delete the web service if desired
###Code
serv = Webservice(ws, 'automl-oj-forecast-01')
serv.delete() # don't do it accidentally
###Output
_____no_output_____ |
ipynb/Travel Claim Hackathon 2.ipynb | ###Markdown
Importing Library
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score,f1_score, precision_score, recall_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.stats import chi2_contingency
import category_encoders as ce
from sklearn.preprocessing import normalize,scale
import imblearn
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reading training data
###Code
path = r"C:\Users\eakhumb\Downloads\file\train.csv"
df = pd.read_csv(path)
df = df.drop(columns=["ID"])
data=df
data[data["Claim"]==0].shape, data[data["Claim"]==1].shape
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
X_train, X_test, y_train, y_test = train_test_split(data.drop(columns=["Claim"]), data["Claim"], test_size=0.3, random_state=6)
###Output
_____no_output_____
###Markdown
Data Pre-Processing
###Code
unique_destinations_count = X_train["Destination"].value_counts()
unique_destinations = list(unique_destinations_count.index)
rare_destination = list(unique_destinations_count[unique_destinations_count<2].index)
X_train["Destination"] = X_train["Destination"].apply(lambda x: "Others" if x in rare_destination else x)
X_test["Destination"] = X_test["Destination"].apply(lambda x: "Others" if ((x not in unique_destinations) | (x in rare_destination)) else x)
###Output
_____no_output_____
###Markdown
Label Encoding
###Code
# Fit a fresh LabelEncoder per categorical column on the training split,
# then apply the same mapping to the test split
for col in ["Agency", "Agency Type", "Distribution Channel", "Product Name", "Destination"]:
    le = LabelEncoder()
    X_train[col] = le.fit_transform(X_train[col])
    X_test[col] = le.transform(X_test[col])
###Output
_____no_output_____
###Markdown
Making Predictions Using Logistic Regression
###Code
lr = LogisticRegression()
lr.fit(X_train, y_train)
lr.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Checking correlation
###Code
cor = data.corr()
cor = abs(cor)
cor[cor>=0.75]
# cor
pd.crosstab(data["Agency"], data["Claim"])
sns.heatmap(data.isnull(), yticklabels=False, cbar=False, cmap='tab20c_r')
data["Agency"].value_counts()
data.head()
###Output
_____no_output_____
###Markdown
Chi-squared test for categorical columns
###Code
csq = chi2_contingency(pd.crosstab(data['Claim'], data['Agency']))
print("Agency chi-square statistic : ",csq[0])
csq1 = chi2_contingency(pd.crosstab(data['Claim'], data['Agency Type']))
print("Agency Type chi-square statistic : ",csq1[0])
csq2 = chi2_contingency(pd.crosstab(data['Claim'], data['Distribution Channel']))
print("Distribution Channel chi-square statistic : ",csq2[0])
csq3 = chi2_contingency(pd.crosstab(data['Claim'], data['Product Name']))
print("Product Name chi-square statistic : ",csq3[0])
csq4 = chi2_contingency(pd.crosstab(data['Claim'], data['Destination']))
print("Destination chi-square statistic : ",csq4[0])
###Output
Agency chi-square statistic :  10852.787999889759
Agency Type chi-square statistic :  4322.140017164073
Distribution Channel chi-square statistic :  8.984836712505931
Product Name chi-square statistic :  12006.993058015907
Destination chi-square statistic :  7050.809768066112
###Markdown
Trying to find outliers
###Code
fig, ax = plt.subplots(2, 2, figsize=[20,10])
fig.subplots_adjust(hspace = .30)
sns.boxplot(data["Duration"], ax=ax[0][0])
ax[0][0].set_title("Duration")
sns.boxplot(data["Net Sales"], ax=ax[0][1])
ax[0][1].set_title("Net Sales")
sns.boxplot(data[r"Commision (in value)"], ax=ax[1][0])
ax[1][0].set_title(r"Commision (in value)")
sns.boxplot(data["Age"], ax=ax[1][1])
ax[1][1].set_title("Age")
###Output
_____no_output_____
###Markdown
Removing Outliers
###Code
data = data[(data.Duration<1000) & (data.Duration>=0)]
print(data[data["Net Sales"]<0].shape)
print(data[data["Net Sales"]>=0].shape)
###Output
(503, 10)
(51794, 10)
###Markdown
EDA
###Code
fig, ax = plt.subplots(4, 2, figsize=[20,15])
fig.subplots_adjust(hspace = .30)
ax[0][0].bar(list(data["Agency"].value_counts().index), data["Agency"].value_counts(), color ='darkred')
ax[0][0].tick_params(axis = 'x', labelrotation=90)
ax[0][0].set_title("Agency Count")
ax[1][0].bar(list(data["Agency Type"].value_counts().index), data["Agency Type"].value_counts(), color ='darkred')
ax[1][0].set_title("Agency Type Count")
ax[2][0].bar(list(data["Distribution Channel"].value_counts().index), data["Distribution Channel"].value_counts(), color ='darkred')
ax[2][0].set_title("Distribution Channel Count")
ax[3][0].bar(list(data["Product Name"].value_counts().index), data["Product Name"].value_counts(), color ='darkred')
ax[3][0].tick_params(axis = 'x', labelrotation=90)
ax[3][0].set_title("Product Name Count")
ax[0][1].hist(data["Duration"], bins=20, color ='darkred')
ax[0][1].set_title("Duration")
ax[1][1].hist(data["Net Sales"], bins=20, color ='darkred')
ax[1][1].set_title("Net Sales")
ax[2][1].hist(data[r"Commision (in value)"], bins=20, color ='darkred')
ax[2][1].set_title(r"Commision (in value)")
ax[3][1].hist(data["Age"], bins=20, color ='darkred')
ax[3][1].set_title("Age")
plt.show()
distribution = data.groupby("Distribution Channel")["Claim"].value_counts().unstack()
distribution.plot(kind="bar", stacked=True)
pd.crosstab(data["Distribution Channel"], data["Claim"], normalize="index").plot(kind="bar", stacked=True)
fig, ax = plt.subplots(2, 2, figsize=[20,15])
fig.subplots_adjust(hspace = .30)
ax[0][0].hist(data[data["Claim"]==0]["Duration"], bins = 25, label ='0', alpha = .50,edgecolor= 'black',color ='grey')
ax[0][0].hist(data[data["Claim"]==1]["Duration"], bins = 25, label ='1', alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[0][0].legend()
ax[0][0].set_title("Duration vs Claim")
ax[0][1].hist(data[data["Claim"]==0]["Net Sales"], bins = 25, label ='0', alpha = .50,edgecolor= 'black',color ='grey')
ax[0][1].hist(data[data["Claim"]==1]["Net Sales"], bins = 25, label ='1', alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[0][1].legend()
ax[0][1].set_title("Net Sales vs Claim")
ax[1][0].hist(data[data["Claim"]==0][r"Commision (in value)"], bins = 25, label ='0', alpha = .50,edgecolor= 'black',color ='grey')
ax[1][0].hist(data[data["Claim"]==1][r"Commision (in value)"], bins = 25, label ='1', alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[1][0].legend()
ax[1][0].set_title(r"Commision (in value) vs Claim")
ax[1][1].hist(data[data["Claim"]==0]["Age"], bins = 25, label ='0', alpha = .50,edgecolor= 'black',color ='grey')
ax[1][1].hist(data[data["Claim"]==1]["Age"], bins = 25, label ='1', alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[1][1].legend()
ax[1][1].set_title("Age vs Claim")
plt.show()
###Output
_____no_output_____
###Markdown
Scatter plot for numeric data
###Code
fig, ax = plt.subplots(2, 2, figsize=[20,10])
sns.scatterplot(data["Net Sales"], data["Duration"], ax=ax[0][0], color="red")
sns.scatterplot(data["Age"], data["Net Sales"], ax=ax[0][1], color="green")
sns.scatterplot(data["Net Sales"], data["Commision (in value)"], ax=ax[1][0], color="purple")
sns.scatterplot(data["Commision (in value)"], data["Age"], ax=ax[1][1], color="blue")
###Output
_____no_output_____
###Markdown
One-Hot Encoding
###Code
newdata = data.join(pd.get_dummies(data["Agency"])).join(pd.get_dummies(data["Agency Type"])).join(pd.get_dummies(data["Distribution Channel"])).join(pd.get_dummies(data["Product Name"])).join(pd.get_dummies(data["Destination"])).drop(columns=["Agency", "Agency Type", "Distribution Channel", "Product Name", "Destination"])
newdata.shape
###Output
_____no_output_____
###Markdown
Making prediction
###Code
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12621 447]
[ 1810 799]]
Classification report
precision recall f1-score support
0 0.87 0.97 0.92 13068
1 0.64 0.31 0.41 2609
accuracy 0.86 15677
macro avg 0.76 0.64 0.67 15677
weighted avg 0.84 0.86 0.83 15677
Accuracy score : 85.6 %
###Markdown
Label encoding
###Code
le = LabelEncoder()
newdata2=data
newdata2["Agency"] = le.fit_transform(newdata2["Agency"])
newdata2["Agency Type"] = le.fit_transform(newdata2["Agency Type"])
newdata2["Distribution Channel"] = le.fit_transform(newdata2["Distribution Channel"])
newdata2["Product Name"] = le.fit_transform(newdata2["Product Name"])
newdata2["Destination"] = le.fit_transform(newdata2["Destination"])
###Output
_____no_output_____
###Markdown
Splitting data, then making predictions
###Code
X_train, X_test, y_train, y_test = train_test_split(newdata2.drop(columns=["Claim"]), newdata2["Claim"], test_size=0.3, random_state=6)
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12728 406]
[ 1993 566]]
Classification report
precision recall f1-score support
0 0.86 0.97 0.91 13134
1 0.58 0.22 0.32 2559
accuracy 0.85 15693
macro avg 0.72 0.60 0.62 15693
weighted avg 0.82 0.85 0.82 15693
Accuracy score : 84.71 %
###Markdown
Label Encoder then Random Forest Prediction
###Code
rfc = RandomForestClassifier(n_estimators=200)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12705 429]
[ 551 2008]]
Classification report
precision recall f1-score support
0 0.96 0.97 0.96 13134
1 0.82 0.78 0.80 2559
accuracy 0.94 15693
macro avg 0.89 0.88 0.88 15693
weighted avg 0.94 0.94 0.94 15693
Accuracy score : 93.76 %
###Markdown
Making prediction using DecisionTreeClassifier
###Code
newdata = data.join(pd.get_dummies(data["Agency"])).join(pd.get_dummies(data["Agency Type"])).join(pd.get_dummies(data["Distribution Channel"])).join(pd.get_dummies(data["Product Name"])).join(pd.get_dummies(data["Destination"])).drop(columns=["Agency", "Agency Type", "Distribution Channel", "Product Name", "Destination"])
newdata.shape
X_train, X_test, y_train, y_test = train_test_split(newdata.drop(columns=["Claim"]), newdata["Claim"], test_size=0.3, random_state=6)
dt = DecisionTreeClassifier(max_depth=3)
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12352 782]
[ 1502 1057]]
Classification report
precision recall f1-score support
0 0.89 0.94 0.92 13134
1 0.57 0.41 0.48 2559
accuracy 0.85 15693
macro avg 0.73 0.68 0.70 15693
weighted avg 0.84 0.85 0.84 15693
Accuracy score : 85.45 %
###Markdown
Making prediction using RandomForestClassifier
###Code
rfc = RandomForestClassifier(n_estimators=200)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
# top 19 features by importance, ascending so the largest appears at the top of the barh plot
feature_imp = sorted(zip(newdata.columns, rfc.feature_importances_), key=lambda x: x[1])[-19:]
feature_imp = pd.DataFrame(feature_imp)
feature_imp.index = feature_imp.iloc[:,0]
feature_imp = feature_imp.drop(columns=[0])
plt.figure(figsize=(20,10))
plt.barh(feature_imp.index, feature_imp[1])
plt.show()
###Output
_____no_output_____
###Markdown
Hyperparameter tuning with grid search (GridSearchCV)
###Code
rfc=RandomForestClassifier()
param_grid = {
'n_estimators': [200, 300, 400, 500],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [None,4,5,6,7,8],
'criterion' :['gini', 'entropy']
}
cv_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv=5)
cv_rfc.fit(X_train, y_train)
y_pred = cv_rfc.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
cv_rfc.best_estimator_
###Output
_____no_output_____
###Markdown
Binary Encoding and Prediction using Binary Encoder
###Code
be = ce.BinaryEncoder(cols=["Agency", "Agency Type", "Distribution Channel", "Product Name", "Duration", "Destination"])
data_binary = be.fit_transform(data)
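# How BinaryEncoder works (sketch of the idea): each category is mapped to an
# ordinal index, and that index is written in binary across ceil(log2(n_categories))
# output columns (e.g. index 5 -> 0,1,0,1), keeping the width far below one-hot for
# high-cardinality columns such as Destination.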
X_train, X_test, y_train, y_test = train_test_split(data_binary.drop(columns=["Claim"]), data_binary["Claim"], test_size=0.3, random_state=6)
rfc = RandomForestClassifier(n_estimators=200)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12596 538]
[ 862 1697]]
Classification report
precision recall f1-score support
0 0.94 0.96 0.95 13134
1 0.76 0.66 0.71 2559
accuracy 0.91 15693
macro avg 0.85 0.81 0.83 15693
weighted avg 0.91 0.91 0.91 15693
Accuracy score : 91.08 %
###Markdown
Random forest with outlier handling and normalization
###Code
newdata = df.drop(columns="Agency")
###Output
_____no_output_____
###Markdown
This method of handling outliers (replacing them with the mean duration) gives 93.58% accuracy
###Code
newdata["Duration"] = newdata["Duration"].apply(lambda x: np.NaN if (x>1000) | (x<0) else x)
newdata.fillna(newdata["Duration"].mean(),inplace=True)
###Output
_____no_output_____
###Markdown
This method of handling outliers (dropping the offending rows) gives 93.76% accuracy
###Code
newdata = newdata[(newdata["Duration"]<1000) & (newdata["Duration"]>0)]
###Output
_____no_output_____
###Markdown
One-Hot Encoding
###Code
newdata = newdata.join(pd.get_dummies(newdata["Agency Type"])).join(pd.get_dummies(newdata["Distribution Channel"])).join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Agency Type", "Distribution Channel", "Product Name", "Destination"])
newdata.shape
newdata = newdata.join(pd.get_dummies(newdata["Agency"])).join(pd.get_dummies(newdata["Agency Type"])).join(pd.get_dummies(newdata["Distribution Channel"])).join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Agency", "Agency Type", "Distribution Channel", "Product Name", "Destination"])
newdata.shape
###Output
_____no_output_____
###Markdown
Normalization achieves 89.31% accuracy with the random forest
###Code
newdata[["Duration", "Net Sales", "Commision (in value)", "Age"]] = pd.DataFrame(normalize(newdata.iloc[:,:4]))
###Output
_____no_output_____
###Markdown
Scaling achieves 93.65% accuracy with the random forest
###Code
newdata[["Duration", "Net Sales", "Commision (in value)", "Age"]] = pd.DataFrame(scale(newdata.iloc[:,:4]))
for col in ["Duration", "Net Sales", "Commision (in value)", "Age"]:
newdata[col] = newdata[col].fillna(newdata[col].mean())
X_train, X_test, y_train, y_test = train_test_split(newdata.drop(columns=["Claim"]), newdata["Claim"], test_size=0.3, random_state=6)
rfc = RandomForestClassifier(n_estimators=750)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
X_train.shape
###Output
_____no_output_____
###Markdown
Balancing the dataset
###Code
data["Claim"].value_counts().plot(kind="bar")
sns.countplot("Claim", data = data)
newdata = df.drop(columns="Agency")
newdata["Agency Type"] = newdata["Agency Type"].replace({"Travel Agency" : 0, "Airlines" : 1})
newdata["Distribution Channel"] = newdata["Distribution Channel"].replace({"Online" : 0, "Offline" : 1})
newdata = newdata.join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Product Name", "Destination"])
data=newdata
X_sample, X_test, y_sample, y_test = train_test_split(data.drop(columns=["Claim"]), data["Claim"], test_size=0.3, random_state=6)
X_sample = X_sample[(X_sample["Duration"]<1000) & (X_sample["Duration"]>0)]
X_sample = X_sample[X_sample["Age"]<=90]
X_sample = X_sample[X_sample["Net Sales"]>0]
y_sample = y_sample.loc[X_sample.index]
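# SMOTE (note: despite the `ros` name this is not plain random oversampling)
# balances the classes by synthesizing new minority-class rows as interpolations
# between a minority sample and one of its nearest minority neighbours.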
ros = imblearn.over_sampling.SMOTE()
X_resampled, y_resampled = ros.fit_resample(X_sample, y_sample)
pd.DataFrame(data = Counter(y_resampled), index=[0]).plot(kind="bar")
sns.countplot(y_resampled)
newdata = pd.DataFrame(X_resampled, columns=X_sample.columns)
y_resampled = pd.Series(y_resampled, name="Claim")
X_train, X_validate, y_train, y_validate = train_test_split(newdata, y_resampled, test_size=0.1, random_state=6)
rfc = RandomForestClassifier(n_estimators=750)
rfc.fit(X_train, y_train)
y_validate_pred = rfc.predict(X_validate)
cm = confusion_matrix(y_validate, y_validate_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_validate, y_validate_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_validate, y_validate_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
y_test_pred = rfc.predict(X_test)
cm = confusion_matrix(y_test, y_test_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_test_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_test_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12549 585]
[ 429 2130]]
Classification report
precision recall f1-score support
0 0.97 0.96 0.96 13134
1 0.78 0.83 0.81 2559
accuracy 0.94 15693
macro avg 0.88 0.89 0.88 15693
weighted avg 0.94 0.94 0.94 15693
Accuracy score : 93.54 %
###Markdown
A new approach: quantile-based outlier capping
###Code
df.head()
# newdata = df.drop(columns=["Agency"])
newdata = df
newdata["Agency Type"] = newdata["Agency Type"].replace({"Travel Agency" : 0, "Airlines" : 1})
newdata["Distribution Channel"] = newdata["Distribution Channel"].replace({"Online" : 0, "Offline" : 1})
# newdata = newdata.join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Product Name", "Destination"])
newdata = newdata.join(pd.get_dummies(newdata["Agency"])).join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Agency", "Product Name", "Destination"])
newdata.head()
min_age = newdata["Age"].quantile(0.05)
max_age = newdata["Age"].quantile(0.95)
min_duration = newdata["Duration"].quantile(0.05)
max_duration = newdata["Duration"].quantile(0.95)
min_netSales = newdata["Net Sales"].quantile(0.05)
max_netSales = newdata["Net Sales"].quantile(0.95)
min_commision = newdata["Commision (in value)"].quantile(0.05)
max_commision = newdata["Commision (in value)"].quantile(0.95)
mean_duration = newdata[(newdata["Duration"]>min_duration) & (newdata["Duration"]<max_duration)]["Duration"].mean()
mean_age = newdata[(newdata["Age"]>min_age) & (newdata["Age"]<max_age)]["Age"].mean()
mean_netSales = newdata[(newdata["Net Sales"]>min_netSales) & (newdata["Net Sales"]<max_netSales)]["Duration"].mean()
mean_commision = newdata[(newdata["Commision (in value)"]>min_commision) & (newdata["Commision (in value)"]<max_commision)]["Commision (in value)"].mean()
newdata["Duration"] = newdata["Duration"].apply(lambda x: mean_duration if (x<=min_duration) | (x>=max_duration) else x)
newdata["Age"] = newdata["Age"].apply(lambda x: mean_age if (x<=min_age) | (x>=max_age) else x)
newdata["Net Sales"] = newdata["Net Sales"].apply(lambda x: mean_netSales if (x<=min_netSales) | (x>=max_netSales) else x)
newdata["Commision (in value)"] = newdata["Commision (in value)"].apply(lambda x: mean_commision if (x<=min_commision) | (x>=max_commision) else x)
newdata[["Duration", "Net Sales", "Commision (in value)", "Age"]] = pd.DataFrame(scale(newdata.iloc[:,:4]))
for col in ["Duration", "Net Sales", "Commision (in value)", "Age"]:
newdata[col] = newdata[col].fillna(newdata[col].mean())
X_train, X_test, y_train, y_test = train_test_split(newdata.drop(columns=["Claim"]), newdata["Claim"], test_size=0.3, random_state=6)
X_train = X_train[(X_train["Duration"]<1000) & (X_train["Duration"]>0)]
X_train = X_train[X_train["Age"]<=90]
X_train = X_train[X_train["Net Sales"]>0]
y_train = y_train.loc[X_train.index]
X_train = X_train[(X_train["Duration"]>min_duration) & (X_train["Duration"]<max_duration)]
X_train = X_train[(X_train["Age"]>min_age) & (X_train["Age"]<max_age)]
X_train = X_train[(X_train["Net Sales"]>min_netSales) & (X_train["Net Sales"]<max_netSales)]
y_train = y_train.loc[X_train.index]
rfc = RandomForestClassifier(n_estimators=300)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12675 459]
[ 563 1996]]
Classification report
precision recall f1-score support
0 0.96 0.97 0.96 13134
1 0.81 0.78 0.80 2559
accuracy 0.93 15693
macro avg 0.89 0.87 0.88 15693
weighted avg 0.93 0.93 0.93 15693
Accuracy score : 93.49 %
###Markdown
KNN
###Code
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
###Output
Confusion Matrix
[[12176 958]
[ 816 1743]]
Classification report
precision recall f1-score support
0 0.94 0.93 0.93 13134
1 0.65 0.68 0.66 2559
accuracy 0.89 15693
macro avg 0.79 0.80 0.80 15693
weighted avg 0.89 0.89 0.89 15693
Accuracy score : 88.7 %
###Markdown
SVM
###Code
clf = SVC(kernel='linear')
clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print("Confusion Matrix")
print(cm)
cr = classification_report(y_test, y_pred)
print()
print("Classification report")
print(cr)
accuracy = accuracy_score(y_test, y_pred)
print()
print("Accuracy score : ", round(accuracy*100,2), "%")
# top 19 features by importance, ascending so the largest appears at the top of the barh plot
feature_imp = sorted(zip(newdata.columns, rfc.feature_importances_), key=lambda x: x[1])[-19:]
feature_imp = pd.DataFrame(feature_imp)
feature_imp.index = feature_imp.iloc[:,0]
feature_imp = feature_imp.drop(columns=[0])
plt.figure(figsize=(20,10))
plt.barh(feature_imp.index, feature_imp[1])
plt.show()
###Output
_____no_output_____
###Markdown
Exploring Test data
###Code
submission = pd.read_csv(r"C:\Users\eakhumb\Downloads\file\test.csv")
data = submission
fig, ax = plt.subplots(4, 2, figsize=[20,15])
fig.subplots_adjust(hspace = .30)
ax[0][0].bar(list(data["Agency"].value_counts().index), data["Agency"].value_counts(), alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[0][0].tick_params(axis = 'x', labelrotation=90)
ax[0][0].set_title("Agency Count")
ax[1][0].bar(list(data["Agency Type"].value_counts().index), data["Agency Type"].value_counts(), alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[1][0].set_title("Agency Type Count")
ax[2][0].bar(list(data["Distribution Channel"].value_counts().index), data["Distribution Channel"].value_counts(), alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[2][0].set_title("Distribution Channel Count")
ax[3][0].bar(list(data["Product Name"].value_counts().index), data["Product Name"].value_counts(), alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[3][0].tick_params(axis = 'x', labelrotation=90)
ax[3][0].set_title("Product Name Count")
ax[0][1].hist(data["Duration"], bins=20, alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[0][1].set_title("Duration")
ax[1][1].hist(data["Net Sales"], bins=20, alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[1][1].set_title("Net Sales")
ax[2][1].hist(data[r"Commision (in value)"], bins=20, alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[2][1].set_title(r"Commision (in value)")
ax[3][1].hist(data["Age"], bins=20, alpha = .50,edgecolor= 'black',color ='lightgreen')
ax[3][1].set_title("Age")
plt.show()
###Output
_____no_output_____
###Markdown
Creating Submission file using best method
###Code
path = r"C:\Users\eakhumb\Downloads\file\train.csv"
df = pd.read_csv(path)
df = df.drop(columns=["ID"])
newdata = df.drop(columns=["Agency"])
newdata["Agency Type"] = newdata["Agency Type"].replace({"Travel Agency" : 0, "Airlines" : 1})
newdata["Distribution Channel"] = newdata["Distribution Channel"].replace({"Online" : 0, "Offline" : 1})
newdata = newdata.join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Product Name", "Destination"])
newdata = newdata[(newdata["Duration"]<1000) & (newdata["Duration"]>0)]
newdata = newdata[newdata["Age"]<=90]
newdata = newdata[newdata["Net Sales"]>0]
rfc = RandomForestClassifier(n_estimators=300)
rfc.fit(newdata.drop(columns="Claim"), newdata["Claim"])
submission = pd.read_csv(r"C:\Users\eakhumb\Downloads\file\test.csv")
submission_id = submission["ID"]
submission.drop(columns=["ID"], inplace=True)
submission_id.head()
newdata = submission.drop(columns=["Agency"])
newdata["Agency Type"] = newdata["Agency Type"].replace({"Travel Agency" : 0, "Airlines" : 1})
newdata["Distribution Channel"] = newdata["Distribution Channel"].replace({"Online" : 0, "Offline" : 1})
newdata = newdata.join(pd.get_dummies(newdata["Product Name"])).join(pd.get_dummies(newdata["Destination"])).drop(columns=["Product Name", "Destination"])
submission_pred = rfc.predict(newdata)
submission_id.shape, submission_pred.shape
submission_template = pd.read_csv(r"C:\Users\eakhumb\Downloads\file\sample_submission.csv")
submission_template.head()
submission_final = pd.DataFrame({"ID" : submission_id, "Claim" : submission_pred})
submission_final.to_csv(r"C:\Users\eakhumb\Downloads\file\Submission.csv", index=False)
###Output
_____no_output_____ |
wandb/run-20210519_095825-286zb4nl/tmp/code/00-main.ipynb | ###Markdown
testing
###Code
from load_data import *
# load_data()
###Output
_____no_output_____
###Markdown
Loading the data
###Code
from load_data import *
X_train,X_test,y_train,y_test = load_data()
len(X_train),len(y_train)
len(X_test),len(y_test)
###Output
_____no_output_____
###Markdown
Test Modelling
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self) -> None:
super().__init__()
self.c1 = nn.Conv2d(1,64,5)
self.c2 = nn.Conv2d(64,128,5)
self.c3 = nn.Conv2d(128,256,5)
self.fc4 = nn.Linear(256*10*10,256)
self.fc6 = nn.Linear(256,128)
self.fc5 = nn.Linear(128,4)
def forward(self,X):
preds = F.max_pool2d(F.relu(self.c1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.c2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.c3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,256*10*10)
preds = F.relu(self.fc4(preds))
preds = F.relu(self.fc6(preds))
preds = self.fc5(preds)
return preds
device = torch.device('cuda')
BATCH_SIZE = 32
IMG_SIZE = 112
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
EPOCHS = 125
from tqdm import tqdm
PROJECT_NAME = 'Weather-Clf'
import wandb
# test_index += 1
# wandb.init(project=PROJECT_NAME,name=f'test')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# wandb.finish()
# for index in range(10):
# print(torch.argmax(preds[index]))
# print(y_batch[index])
# print('\n')
class Test_Model(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1,16,5)
self.conv2 = nn.Conv2d(16,32,5)
self.conv3 = nn.Conv2d(32,64,5)
self.fc1 = nn.Linear(64*10*10,16)
self.fc2 = nn.Linear(16,32)
self.fc3 = nn.Linear(32,64)
self.fc4 = nn.Linear(64,32)
self.fc5 = nn.Linear(32,6)
def forward(self,X):
preds = F.max_pool2d(F.relu(self.conv1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.conv2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.conv3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,64*10*10)
preds = F.relu(self.fc1(preds))
preds = F.relu(self.fc2(preds))
preds = F.relu(self.fc3(preds))
preds = F.relu(self.fc4(preds))
preds = F.relu(self.fc5(preds))
return preds
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
# test_index += 1
# wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# wandb.finish()
###Output
_____no_output_____
###Markdown
Modelling
###Code
class Test_Model(nn.Module):
def __init__(self,conv1_output=16,conv2_output=32,conv3_output=64,fc1_output=16,fc2_output=32,fc3_output=64,activation=F.relu):
super().__init__()
self.conv3_output = conv3_output
self.conv1 = nn.Conv2d(1,conv1_output,5)
self.conv2 = nn.Conv2d(conv1_output,conv2_output,5)
self.conv3 = nn.Conv2d(conv2_output,conv3_output,5)
self.fc1 = nn.Linear(conv3_output*10*10,fc1_output)
self.fc2 = nn.Linear(fc1_output,fc2_output)
self.fc3 = nn.Linear(fc2_output,fc3_output)
self.fc4 = nn.Linear(fc3_output,fc2_output)
self.fc5 = nn.Linear(fc2_output,6)
self.activation = activation
def forward(self,X):
preds = F.max_pool2d(self.activation(self.conv1(X)),(2,2))
preds = F.max_pool2d(self.activation(self.conv2(preds)),(2,2))
preds = F.max_pool2d(self.activation(self.conv3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,self.conv3_output*10*10)
preds = self.activation(self.fc1(preds))
preds = self.activation(self.fc2(preds))
preds = self.activation(self.fc3(preds))
preds = self.activation(self.fc4(preds))
preds = self.activation(self.fc5(preds))
return preds
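# Spatial-size check (assuming 112x112 inputs): 112 -> conv5 -> 108 -> pool2 -> 54
# -> conv5 -> 50 -> pool2 -> 25 -> conv5 -> 21 -> pool2 -> 10 (floor), which is why
# the flatten size above is conv3_output*10*10.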
# conv1_output = 32
# conv2_output = 8
# conv3_output =
# fc1_output
# fc2_output
# fc3_output
# activation
# optimizer
# loss
# lr
# num of epochs
def get_loss(criterion,y,model,X):
    # metric-only helper: no gradients are needed, so skip backward and run under no_grad
    model.to('cpu')
    with torch.no_grad():
        preds = model(X.view(-1,1,112,112).to('cpu').float())
        loss = criterion(preds,torch.tensor(y,dtype=torch.long).to('cpu'))
    return loss.item()
def test(net,X,y):
device = 'cpu'
net.to(device)
correct = 0
total = 0
net.eval()
with torch.no_grad():
for i in range(len(X)):
# labels here are class indices, not one-hot vectors, so use them directly
# (argmax of a 0-dim tensor would always return 0)
real_class = y[i].to(device)
net_out = net(X[i].view(-1,1,112,112).to(device).float())
net_out = net_out[0]
predictied_class = torch.argmax(net_out)
if predictied_class == real_class:
correct += 1
total += 1
net.train()
net.to('cuda')
return round(correct/total,3)
EPOCHS = 3
fc1_outputs = [64,128,256,512]
for fc1_output in fc1_outputs:
wandb.init(project=PROJECT_NAME,name=f'fc1_output-{fc1_output}')
model = Test_Model(conv1_output=32,conv2_output=8,fc1_output=fc1_output).to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
preds.to(device)
loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':get_loss(criterion,y_train,model,X_train),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test),'val_loss':get_loss(criterion,y_test,model,X_test)})
for index in range(10):
print(torch.argmax(preds[index]))
print(y_batch[index])
print('\n')
wandb.finish()
# conv3_outputs = [16,32,64,128]
# for conv3_output in conv3_outputs:
# wandb.init(project=PROJECT_NAME,name=f'conv3_output-{conv3_output}')
# model = Test_Model(conv1_output=32,conv2_output=8,conv3_output=conv3_output).to(device)
# optimizer = optim.SGD(model.parameters(),lr=0.1)
# criterion = nn.CrossEntropyLoss()
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':get_loss(criterion,y_train,model,X_train),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test),'val_loss':get_loss(criterion,y_test,model,X_test)})
# for index in range(10):
# print(torch.argmax(preds[index]))
# print(y_batch[index])
# print('\n')
# wandb.finish()
###Output
_____no_output_____ |
DP_PATE_SkinCancer(2).ipynb | ###Markdown
###Code
!pip install syft
import numpy as np
from PIL import Image
import random
import torch
from torch.utils.data import Dataset, Subset, DataLoader
from torchvision import datasets, transforms, models
from torch import nn, optim
import torch.nn.functional as F
import time, os, random
# library from PySyft needed to perform PATE analysis
from syft.frameworks.torch.dp import pate
# we'll train on GPU if it is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
%cd '/content/drive/My Drive/Colab Notebooks/CancerDataset'
class SkinCancerDataset(Dataset):
def __init__(self, benign_path, malignant_path, transform=None):
benign_list = [[os.path.join(benign_path, filename),'0'] for filename in os.listdir(benign_path)]
malignant_list = [[os.path.join(malignant_path, filename),'1'] for filename in os.listdir(malignant_path)]
self.img_list = []
self.img_list = benign_list + malignant_list
random.shuffle(self.img_list)
self.transform = transform
def __len__(self):
return len(self.img_list)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_path = self.img_list[idx][0]
image = Image.open(img_path).convert('RGB')
if self.transform:
image = self.transform(image)
label = int(self.img_list[idx][1])
return image, label
data_transforms = transforms.Compose([
transforms.RandomResizedCrop((224),scale=(0.5,1.0)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
trainset = SkinCancerDataset(benign_path = './data/train/benign',
malignant_path = './data/train/malignant', transform = data_transforms)
testset = SkinCancerDataset(benign_path = './data/test/benign',
malignant_path = './data/test/malignant' , transform = data_transforms)
validset = SkinCancerDataset(benign_path = './data/valid/benign',
malignant_path = './data/valid/malignant', transform = data_transforms)
len(trainset),len(testset),len(validset)
batchsize=16
data_loader = DataLoader(trainset, batch_size=batchsize, shuffle=True)
import matplotlib.pyplot as plt
## Method to display Image for Tensor
def imshow(image, ax=None, title=None, normalize=True):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
#print(type(image))
image = image.numpy().transpose((1, 2, 0))
if normalize:
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
image = np.clip(image, 0, 1)
ax.imshow(image)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.tick_params(axis='both', length=0)
ax.set_xticklabels('')
ax.set_yticklabels('')
return ax
# Displaying Images and other info about the train set
ii=0
images, labels = next(iter(data_loader))
print(" Image Size",images.size())
print(" Image Size",images[ii].size())
fig, axes = plt.subplots(figsize=(16,5), ncols=5)
for ii in range(5):
ax = axes[ii]
ax.set_title(labels[ii])
imshow(images[ii], ax=ax, normalize=True)
# TEACHERS
#divide train set among teachers and create dataloaders for valid and trainsets
num_teachers = 5
valid_per = 0.2 #20% for validation
batch_size = 32
def teacher_dataloaders(trainset=trainset, num_teachers=num_teachers, batch_size=batch_size, valid_per = 0.2):
trainloaders = []
validloaders = []
teacher_data_len = len(trainset) // num_teachers
# create a shuffled list of all indices (starting at 0 so no sample is left out)
my_list = random.sample(range(len(trainset)), len(trainset))
for i in range(num_teachers):
# get particular subset of data
indice = my_list[i*teacher_data_len: (i+1)*teacher_data_len]
data_subset = Subset(trainset, indice)
# split into train and validation set
valid_size = int(len(data_subset) * valid_per)
train_size = len(data_subset) - valid_size
train_subset, valid_subset = torch.utils.data.random_split(data_subset, [train_size,valid_size])
#create data loaders
trainloader = DataLoader(train_subset, batch_size=batch_size, shuffle=True, num_workers=1)
validloader = DataLoader(valid_subset, batch_size=batch_size, shuffle=False, num_workers=1)
#add dataloaders to list
trainloaders.append(trainloader)
validloaders.append(validloader)
return trainloaders, validloaders
# creating dataloaders
trainloaders, validloaders = teacher_dataloaders()
len(trainloaders), len(validloaders)
# # STUDENT
# split into train and validation set
valid_size = int(len(testset) * 0.2)
train_size = len(testset) - valid_size
student_train_subset, student_valid_subset = torch.utils.data.random_split(testset, [train_size,valid_size])
#create data loaders
student_train_loader = DataLoader(student_train_subset, batch_size=batch_size, shuffle=False, num_workers=1)
student_valid_loader = DataLoader(student_valid_subset, batch_size=batch_size, shuffle=False, num_workers=1)
len(student_train_loader), len(student_valid_loader)
#Teacher Model
class SimpleCNN(torch.nn.Module):
def __init__(self):
super(SimpleCNN, self).__init__() # input: b, 3, 224, 224
layer1 = torch.nn.Sequential()
layer1.add_module('conv1', torch.nn.Conv2d(3, 32, 3, 1, padding=1))
# b, 32, 224, 224; after the 2x2 max-pool: b, 32, 112, 112 (32*112*112 = 401408)
layer1.add_module('relu1', torch.nn.ReLU(True))
layer1.add_module('pool1', torch.nn.MaxPool2d(2, 2))
self.layer1 = layer1
layer4 = torch.nn.Sequential()
layer4.add_module('fc1', torch.nn.Linear(401408, 2))
self.layer4 = layer4
def forward(self, x):
conv1 = self.layer1(x)
fc_input = conv1.view(conv1.size(0), -1)
fc_out = self.layer4(fc_input)
return fc_out
def train(n_epochs, trainloader, validloader, model, optimizer, criterion, use_cuda, save_path= None, is_not_teacher=False):
"""returns trained model"""
# # initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
train_correct = 0.0
train_total = 0.0
valid_correct =0.0
valid_total = 0.0
# train the model #
model.train()
for batch_idx, (data, target) in enumerate(trainloader):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# initialize weights to zero
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
train_correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
train_total += data.size(0)
train_acc = 100. * train_correct / train_total
# validate the model
model.eval()
for batch_idx, (data, target) in enumerate(validloader):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
valid_correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
valid_total += data.size(0)
valid_acc = 100. * valid_correct / valid_total
# print training/validation statistics
print('Epoch: {} \n\tTrain Loss: {:.6f} \tTrain Acc: {:.6f} \n\tValid Loss: {:.6f} \tValid Acc: {:.6f}'.format(
epoch,train_loss,train_acc,valid_loss,valid_acc ))
## save the student model if validation loss has decreased
if is_not_teacher:
if valid_loss < valid_loss_min:
torch.save(model.state_dict(), save_path)
print('\tValidation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
valid_loss_min = valid_loss
return model
# instantiate model and move it to GPU if available
model = SimpleCNN()
model.to(device)
#define hyperparameters
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters() , lr=0.001)
epochs = 30
# Training teachers — each teacher gets a fresh model and optimizer; reusing one
# shared instance would keep fine-tuning the same network and append five
# references to it
teacher_models = []
i = 1
for trainloader, validloader in zip(trainloaders, validloaders):
print(" Training Teacher {}".format(i))
model = SimpleCNN().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
teacher_model = train(epochs, trainloader, validloader, model, optimizer, criterion, True)
teacher_models.append(teacher_model)
i+=1
print("="*40)
#Get private labels for training student
# get private labels
def student_train_labels(teacher_models, dataloader):
student_labels = []
# get label from each teacher
for model in teacher_models:
student_label = []
for images,_ in dataloader:
with torch.no_grad():
images = images.cuda()
outputs = model(images)
preds = torch.argmax(torch.exp(outputs), dim=1)
student_label.append(preds.tolist())
# add all teacher predictions to student_labels
student_label = sum(student_label, [])
student_labels.append(student_label)
return student_labels
predicted_labels = student_train_labels(teacher_models, student_train_loader)
predicted_labels = np.array([np.array(p) for p in predicted_labels]).transpose(1, 0)
# We see here that we have 5 labels for each image in our dataset
print(predicted_labels.shape)
# See labels of 3rd Image Scan
print(predicted_labels[3])
# Take the majority vote over the teacher labels and add Laplacian noise to the
# vote counts before the argmax ("report noisy max")
def add_noise(predicted_labels, epsilon=0.1):
    noisy_labels = []
    for preds in predicted_labels:
        # count the votes for each of the 2 classes
        label_counts = np.bincount(preds, minlength=2).astype(float)
        # Laplacian noise with scale beta = 1/epsilon on each count
        beta = 1 / epsilon
        label_counts += np.random.laplace(0, beta, len(label_counts))
        # the noisy argmax becomes the student's training label
        noisy_labels.append(np.argmax(label_counts))
    return np.array(noisy_labels)
labels_with_noise = add_noise(predicted_labels, epsilon=0.15)
print(labels_with_noise)
print(labels_with_noise.shape)
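# Optional sanity check (an added sketch, not in the original): compare the
# noisy labels against the clean majority vote to see how many labels the
# Laplacian noise flipped. Uses only `predicted_labels` and `labels_with_noise`
# defined above.
clean_majority = np.array([np.argmax(np.bincount(p, minlength=2)) for p in predicted_labels])
flip_rate = np.mean(clean_majority != labels_with_noise)
print("Fraction of labels flipped by noise: {:.2%}".format(flip_rate))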
import csv
def write_csv(data):
with open('labels.csv', 'a') as outfile:
writer = csv.writer(outfile)
writer.writerow(data)
write_csv(labels_with_noise)
# Performing PATE analysis
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=predicted_labels.T, indices=labels_with_noise, noise_eps=0.15, delta=1e-5)
print('Data dependent epsilon:', data_dep_eps)
print('Data independent epsilon:', data_ind_eps)
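# Optional sweep (an added sketch, not in the original): rerun the noisy-max
# labeling and PATE analysis for several noise levels to see the
# privacy/utility trade-off. Reuses `add_noise`, `predicted_labels`, and the
# `pate.perform_analysis` call from above.
for eps in [0.05, 0.1, 0.15, 0.3]:
    noisy = add_noise(predicted_labels, epsilon=eps)
    dep_eps, ind_eps = pate.perform_analysis(teacher_preds=predicted_labels.T,
                                             indices=noisy, noise_eps=eps, delta=1e-5)
    print("noise_eps={:.2f} -> data-dependent eps={:.4f}, data-independent eps={:.4f}".format(
        eps, dep_eps, ind_eps))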
# Create a new training dataloader for the student that pairs each image with
# its noisy label (replacing the original labels)
def new_student_data_loader(dataloader, noisy_labels, batch_size=32):
image_list = []
for image,_ in dataloader:
image_list.append(image)
data = np.vstack(image_list)
new_dataset = list(zip(data, noisy_labels))
new_dataloader = DataLoader(new_dataset, batch_size, shuffle=False)
return new_dataloader
labeled_student_trainloader = new_student_data_loader(student_train_loader, labels_with_noise)
len(labeled_student_trainloader),len(student_valid_loader)
# Train the student on the noisy labels; use a fresh network rather than
# reusing the already-trained `model`/`optimizer` (a bug in the original)
student_net = SimpleCNN().to(device)
optimizer = optim.Adam(student_net.parameters(), lr=0.001)
student_model = train(epochs, labeled_student_trainloader, student_valid_loader, student_net, optimizer, criterion, True, save_path='./models/student.pth.tar', is_not_teacher=True)
# Normal (non-private) training on the true labels, for comparison
normal_net = SimpleCNN().to(device)
optimizer = optim.Adam(normal_net.parameters(), lr=0.001)
normal_model = train(epochs, student_train_loader, student_valid_loader, normal_net, optimizer, criterion, True, save_path='./models/normal.pth.tar', is_not_teacher=True)
# Create a dataloader for the test Dataset
batch_size=16
print(len(validset))
dataloader = DataLoader(validset, batch_size=batch_size, shuffle=False)
# We set a seed for the dataset to prevent it from producing different values every time it is run
seed = 3
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
def test(dataloader, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(dataloader):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('\tTest Loss: {:.6f}'.format(test_loss))
print('\tTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
print("Student Model")
test(dataloader, student_model, criterion, True)
print("\n=======================\nNormal Model")
test(dataloader, normal_model, criterion, True)
###Output
Student Model
Test Loss: 0.407868
Test Accuracy: 84% (123/146)
=======================
Normal Model
Test Loss: 0.387469
Test Accuracy: 83% (122/146)
|
retrain_classification_ptq_tf1.ipynb | ###Markdown
*Copyright 2019 Google LLC**Licensed under the Apache License, Version 2.0 (the "License")*
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Retrain a classification model for Edge TPU using post-training quantization (with TF1) In this tutorial, we'll use TensorFlow 1.15 to create an image classification model, train it with a flowers dataset, and convert it into the TensorFlow Lite format that's compatible with the Edge TPU (available in [Coral devices](https://coral.ai/products/)).The model will be based on a pre-trained version of MobileNet V2. We'll start by retraining only the classification layers, reusing MobileNet's pre-trained feature extractor layers. Then we'll fine-tune the model by also updating weights in some of the feature extractor layers. This type of transfer learning is much faster than training the entire model from scratch.Once it's trained, we'll use post-training quantization to convert all the parameters to uint8 format, which increases inferencing speed and is required for compatibility on the Edge TPU.For more details about how to create a model that's compatible with the Edge TPU, see the [documentation at coral.ai](https://coral.ai/docs/edgetpu/models-intro/).**Note:** This tutorial requires TensorFlow 1.15. If you're using TF 2.0+, see [the 2.x version of this tutorial](https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_classification_ptq_tf2.ipynb). To start running all the code in this tutorial, select **Runtime > Run all** in the Colab toolbar. Import the required libraries
###Code
try:
# This %tensorflow_version magic only works in Colab.
%tensorflow_version 1.x
except Exception:
pass
# For your non-Colab code, be sure you have tensorflow==1.15
import tensorflow as tf
assert tf.__version__.startswith('1')
tf.enable_eager_execution()
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Prepare the training data First let's download and organize the flowers dataset we'll use to retrain the model (it contains 5 flower classes).Pay attention to this part so you can reproduce it with your own images dataset. In particular, notice that the "flower_photos" directory contains an appropriately-named directory for each class. The following code then randomizes and divides up all these photos into training and validation sets, and generates the labels file.
###Code
_URL = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
zip_file = tf.keras.utils.get_file(origin=_URL,
fname="flower_photos.tgz",
extract=True)
flowers_dir = os.path.join(os.path.dirname(zip_file), 'flower_photos')
###Output
_____no_output_____
###Markdown
Next, we use [`ImageDataGenerator`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) to rescale the image data into float values (divide by 255 so the tensor values are between 0 and 1), and call `flow_from_directory()` to create two generators: one for the training dataset and one for the validation dataset.
###Code
IMAGE_SIZE = 224
BATCH_SIZE = 64
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
train_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='training')
val_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='validation')
###Output
_____no_output_____
###Markdown
On each iteration, these generators provide a batch of images by reading images from disk and processing them to the proper tensor size (224 x 224). The output is a tuple of (images, labels). For example, you can see the shapes here:
###Code
image_batch, label_batch = next(val_generator)
image_batch.shape, label_batch.shape
###Output
_____no_output_____
###Markdown
Now save the class labels to a text file:
###Code
print (train_generator.class_indices)
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('flower_labels.txt', 'w') as f:
f.write(labels)
!cat flower_labels.txt
###Output
_____no_output_____
###Markdown
Build the modelNow we'll create a model that's capable of transfer learning on just the last fully-connected layer. We'll start with MobileNet V2 from Keras as the base model, which is pre-trained with the ImageNet dataset (trained to recognize 1,000 classes). This provides us a great feature extractor for image classification and we can then simply train a new classification layer with our own dataset.**Note:** Not all models from [```tf.keras.applications```](https://www.tensorflow.org/api_docs/python/tf/keras/applications) are compatible with the Edge TPU. For details, read about [quantizing Keras models](https://coral.ai/docs/edgetpu/models-intro/quantizing-keras-models). Create the base model When instantiating the `MobileNetV2` model, we specify the `include_top=False` argument in order to load the network *without* the classification layers at the top. Then we set `trainable` false to freeze all the weights in the base model. This effectively converts the model into a feature extractor because all the pre-trained weights and biases are preserved in the lower layers when we begin training for our classification head.
###Code
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)
# Create the base model from the pre-trained MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
###Output
_____no_output_____
###Markdown
Add a classification headNow we create a new [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model and pass the frozen MobileNet model from above as the base of the graph, and append new classification layers so we can set the final output dimension to match the number of classes in our dataset (5 types of flowers).
###Code
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(units=5, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Configure the modelAlthough this method is called `compile()`, it's basically a configuration step that's required before we can start training.
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
You can see a string summary of the final network with the `summary()` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
And because the majority of the model graph is frozen in the base model, weights from only the last convolution and dense layers are trainable:
###Code
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
###Output
_____no_output_____
###Markdown
Train the model Now we can train the model using data provided by the `train_generator` and `val_generator` we created at the beginning. This takes 5-10 minutes to finish.
###Code
history = model.fit_generator(train_generator,
epochs=10,
validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Review the learning curves
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
Fine tune the base model So far, we've only trained the classification layers—the weights of the pre-trained network were *not* changed. The accuracy results aren't bad, but could be better.One way we can increase the accuracy is to train (or "fine-tune") more layers from the pre-trained model. That is, we'll un-freeze some layers from the base model and adjust those weights (which were originally trained with 1,000 ImageNet classes) so they're better tuned for features found in our flowers dataset. Un-freeze more layers So instead of freezing the entire base model, we'll freeze individual layers.First, let's see how many layers are in the base model:
###Code
print("Number of layers in the base model: ", len(base_model.layers))
###Output
_____no_output_____
###Markdown
Let's try freezing just the bottom 100 layers.
###Code
base_model.trainable = True
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
###Output
_____no_output_____
###Markdown
Reconfigure the modelNow configure the model again, but this time with a lower training rate (the default is 0.001).
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
###Output
_____no_output_____
###Markdown
Continue training Now start training all the trainable layers. This starts with the weights we already trained in the classification layers, so we don't need as many epochs.
###Code
history_fine = model.fit_generator(train_generator,
epochs=5,
validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Review the new learning curves Now that we've done some fine-tuning on the MobileNet V2 base model, let's check the accuracy.
###Code
acc = history_fine.history['acc']
val_acc = history_fine.history['val_acc']
loss = history_fine.history['loss']
val_loss = history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
This is better, but it's not ideal.The validation loss is much higher than the training loss, so there could be some overfitting during training. The overfitting might also be because the new training set is relatively small with less intra-class variance, compared to the original ImageNet dataset used to train MobileNet V2.So this model isn't trained to an accuracy that's production ready, but it works well enough as a demonstration. So let's move on and convert the model to be compatible with the Edge TPU. Convert to TFLite Ordinarily, creating a TensorFlow Lite model is just a few lines of code using the [`TFLiteConverter`](https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter). For example, this code creates a standard TensorFlow Lite model:
###Code
saved_keras_model = 'model.h5'
model.save(saved_keras_model)
converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
However, this `.tflite` file isn't compatible with the Edge TPU because although the `DEFAULT` optimizations flag will quantize the weights, the activation values are still in floating-point. So we must fully quantize the model to use int8 format for all parameter data (both weights and activations).To fully quantize the model, we need to perform [post-training quantization](https://www.tensorflow.org/lite/performance/post_training_quantization) with a representative dataset, which requires a few more arguments for the `TFLiteConverter`, and a function that builds a dataset that's representative of the training dataset. So let's convert the model again, this time using post-training quantization:
###Code
# A generator that provides a representative dataset
def representative_data_gen():
dataset_list = tf.data.Dataset.list_files(flowers_dir + '/*/*')
for i in range(100):
image = next(iter(dataset_list))
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
image = tf.cast(image / 255., tf.float32)
image = tf.expand_dims(image, 0)
yield [image]
saved_keras_model = 'model.h5'
model.save(saved_keras_model)
converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# These set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# And this sets the representative dataset so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224_quant.tflite', 'wb') as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
**Note:** An alternative technique to quantize the model is to use [quantization-aware training](https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/contrib/quantize/README.md#quantization-aware-training). This typically results in better accuracy because the training accounts for the decreased parameter precision. However, quantization-aware training requires modifications to the model graph, which is beyond the scope of this tutorial. Compare the accuracy So now we have a fully quantized model. To be sure the conversion went well, let's run some inferences using both the raw trained model and the new TensorFlow Lite model.First check the accuracy of the raw model:
###Code
batch_images, batch_labels = next(val_generator)
logits = model(batch_images)
prediction = np.argmax(logits, axis=1)
truth = np.argmax(batch_labels, axis=1)
keras_accuracy = tf.keras.metrics.Accuracy()
keras_accuracy(prediction, truth)
print("Raw model accuracy: {:.3%}".format(keras_accuracy.result()))
###Output
_____no_output_____
###Markdown
Now let's check the accuracy of the `.tflite` file, using the same dataset:(This TensorFlow Lite code is a bit more complicated. For details, read the guide to [TensorFlow Lite inference](https://www.tensorflow.org/lite/guide/inference).)
###Code
def set_input_tensor(interpreter, input):
input_details = interpreter.get_input_details()[0]
tensor_index = input_details['index']
input_tensor = interpreter.tensor(tensor_index)()[0]
# Inputs for the TFLite model must be uint8, so we quantize our input data.
# NOTE: This step is necessary only because we're receiving input data from
# ImageDataGenerator, which rescaled all image data to float [0,1]. When using
# bitmap inputs, they're already uint8 [0,255] so this can be replaced with:
# input_tensor[:, :] = input
scale, zero_point = input_details['quantization']
input_tensor[:, :] = np.uint8(input / scale + zero_point)
def classify_image(interpreter, input):
set_input_tensor(interpreter, input)
interpreter.invoke()
output_details = interpreter.get_output_details()[0]
output = interpreter.get_tensor(output_details['index'])
# Outputs from the TFLite model are uint8, so we dequantize the results:
scale, zero_point = output_details['quantization']
output = scale * (output - zero_point)
top_1 = np.argmax(output)
return top_1
interpreter = tf.lite.Interpreter('mobilenet_v2_1.0_224_quant.tflite')
interpreter.allocate_tensors()
# Collect all inference predictions in a list
batch_prediction = []
batch_truth = np.argmax(batch_labels, axis=1)
for i in range(len(batch_images)):
prediction = classify_image(interpreter, batch_images[i])
batch_prediction.append(prediction)
# Compare all predictions to the ground truth
tflite_accuracy = tf.keras.metrics.Accuracy()
tflite_accuracy(batch_prediction, batch_truth)
print("Quant TF Lite accuracy: {:.3%}".format(tflite_accuracy.result()))
###Output
_____no_output_____
###Markdown
A small drop in accuracy is expected with post-training quantization. You might be able to improve this by refining the representative dataset used during quantization.As mentioned earlier, you might also get better accuracy with [quantization-aware training](https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/contrib/quantize/README.md#quantization-aware-training). Compile for the Edge TPU Finally, we're ready to compile the model for the Edge TPU.First download the [Edge TPU Compiler](https://coral.ai/docs/edgetpu/compiler/):
###Code
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
###Output
_____no_output_____
###Markdown
Then compile the model:
###Code
! edgetpu_compiler mobilenet_v2_1.0_224_quant.tflite
###Output
_____no_output_____
###Markdown
That's it.The compiled model uses the same filename but with "_edgetpu" appended at the end. Download the model You can download the converted model and labels file from Colab like this:
###Code
from google.colab import files
files.download('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
files.download('flower_labels.txt')
###Output
_____no_output_____
###Markdown
*Copyright 2019 Google LLC**Licensed under the Apache License, Version 2.0 (the "License")*
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Retrain a classification model for Edge TPU using post-training quantization (with TF1) In this tutorial, we'll use TensorFlow 1.15 to create an image classification model, train it with a flowers dataset, and convert it into the TensorFlow Lite format that's compatible with the Edge TPU (available in [Coral devices](https://coral.ai/products/)).The model will be based on a pre-trained version of MobileNet V2. We'll start by retraining only the classification layers, reusing MobileNet's pre-trained feature extractor layers. Then we'll fine-tune the model by also updating weights in some of the feature extractor layers. This type of transfer learning is much faster than training the entire model from scratch.Once it's trained, we'll use post-training quantization to convert all the parameters to unit8 format, which increases inferencing speed and is required for compatibility on the Edge TPU.For more details about how to create a model that's compatible with the Edge TPU, see the [documentation at coral.ai](https://coral.ai/docs/edgetpu/models-intro/).**Note:** This tutorial requires TensorFlow 1.15. If you're using TF 2.0+, see [the 2.x version of this tutorial](https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_classification_ptq_tf2.ipynb). To start running all the code in this tutorial, select **Runtime > Run all** in the Colab toolbar. Import the required libraries
###Code
try:
# This %tensorflow_version magic only works in Colab.
%tensorflow_version 1.x
except Exception:
pass
# For your non-Colab code, be sure you have tensorflow==1.15
import tensorflow as tf
assert tf.__version__.startswith('1')
tf.enable_eager_execution()
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Prepare the training data First let's download and organize the flowers dataset we'll use to retrain the model (it contains 5 flower classes).Pay attention to this part so you can reproduce it with your own images dataset. In particular, notice that the "flower_photos" directory contains an appropriately-named directory for each class. The following code then randomizes and divides up all these photos into training and validation sets, and generates the labels file.
###Code
_URL = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
zip_file = tf.keras.utils.get_file(origin=_URL,
fname="flower_photos.tgz",
extract=True)
flowers_dir = os.path.join(os.path.dirname(zip_file), 'flower_photos')
###Output
_____no_output_____
###Markdown
Next, we use [`ImageDataGenerator`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) to rescale the image data into float values (divide by 255 so the tensor values are between 0 and 1), and call `flow_from_directory()` to create two generators: one for the training dataset and one for the validation dataset.
###Code
IMAGE_SIZE = 224
BATCH_SIZE = 64
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
train_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='training')
val_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='validation')
###Output
_____no_output_____
###Markdown
On each iteration, these generators provide a batch of images by reading images from disk and processing them to the proper tensor size (224 x 224). The output is a tuple of (images, labels). For example, you can see the shapes here:
###Code
image_batch, label_batch = next(val_generator)
image_batch.shape, label_batch.shape
###Output
_____no_output_____
###Markdown
Now save the class labels to a text file:
###Code
print (train_generator.class_indices)
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('flower_labels.txt', 'w') as f:
f.write(labels)
!cat flower_labels.txt
###Output
_____no_output_____
###Markdown
Build the modelNow we'll create a model that's capable of transfer learning on just the last fully-connected layer. We'll start with MobileNet V2 from Keras as the base model, which is pre-trained with the ImageNet dataset (trained to recognize 1,000 classes). This provides us a great feature extractor for image classification and we can then simply train a new classification layer with our own dataset.**Note:** Not all models from [```tf.keras.applications```](https://www.tensorflow.org/api_docs/python/tf/keras/applications) are compatible with the Edge TPU. For details, read about [quantizing Keras models](https://coral.ai/docs/edgetpu/models-intro/quantizing-keras-models). Create the base model When instantiating the `MobileNetV2` model, we specify the `include_top=False` argument in order to load the network *without* the classification layers at the top. Then we set `trainable` false to freeze all the weights in the base model. This effectively converts the model into a feature extractor because all the pre-trained weights and biases are preserved in the lower layers when we begin training for our classification head.
###Code
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)
# Create the base model from the pre-trained MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
###Output
_____no_output_____
###Markdown
Add a classification headNow we create a new [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model and pass the frozen MobileNet model from above as the base of the graph, and append new classification layers so we can set the final output dimension to match the number of classes in our dataset (5 types of flowers).
###Code
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(units=5, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Configure the modelAlthough this method is called `compile()`, it's basically a configuration step that's required before we can start training.
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
You can see a string summary of the final network with the `summary()` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
And because the majority of the model graph is frozen in the base model, weights from only the last convolution and dense layers are trainable:
###Code
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
###Output
_____no_output_____
###Markdown
Train the model Now we can train the model using data provided by the `train_generator` and `val_generator` we created at the beginning. This takes 5-10 minutes to finish.
###Code
history = model.fit_generator(train_generator,
epochs=10,
validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Review the learning curves
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
Fine tune the base model So far, we've only trained the classification layers—the weights of the pre-trained network were *not* changed. The accuracy results aren't bad, but could be better.One way we can increase the accuracy is to train (or "fine-tune") more layers from the pre-trained model. That is, we'll un-freeze some layers from the base model and adjust those weights (which were originally trained with 1,000 ImageNet classes) so they're better tuned for features found in our flowers dataset. Un-freeze more layers So instead of freezing the entire base model, we'll freeze individual layers.First, let's see how many layers are in the base model:
###Code
print("Number of layers in the base model: ", len(base_model.layers))
###Output
_____no_output_____
###Markdown
Let's try freezing just the bottom 100 layers.
###Code
base_model.trainable = True
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
###Output
_____no_output_____
###Markdown
Reconfigure the modelNow configure the model again, but this time with a lower training rate (the default is 0.001).
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
###Output
_____no_output_____
###Markdown
Continue training Now start training all the trainable layers. This starts with the weights we already trained in the classification layers, so we don't need as many epochs.
###Code
history_fine = model.fit_generator(train_generator,
epochs=5,
validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Review the new learning curves Now that we've done some fine-tuning on the MobileNet V2 base model, let's check the accuracy.
###Code
acc = history_fine.history['acc']
val_acc = history_fine.history['val_acc']
loss = history_fine.history['loss']
val_loss = history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
This is better, but it's not ideal.The validation loss is much higher than the training loss, so there could be some overfitting during training. The overfitting might also be because the new training set is relatively small with less intra-class variance, compared to the original ImageNet dataset used to train MobileNet V2.So this model isn't trained to an accuracy that's production ready, but it works well enough as a demonstration. So let's move on and convert the model to be compatible with the Edge TPU. Convert to TFLite Ordinarily, creating a TensorFlow Lite model is just a few lines of code using the [`TFLiteConverter`](https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter). For example, this code creates a standard TensorFlow Lite model:
###Code
saved_keras_model = 'model.h5'
model.save(saved_keras_model)
converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
However, this `.tflite` file isn't compatible with the Edge TPU because although the `DEFAULT` optimizations flag will quantize the weights, the activation values are still in floating-point. So we must fully quantize the model to use int8 format for all parameter data (both weights and activations).To fully quantize the model, we need to perform [post-training quantization](https://www.tensorflow.org/lite/performance/post_training_quantization) with a representative dataset, which requires a few more arguments for the `TFLiteConverter`, and a function that builds a dataset that's representative of the training dataset. So let's convert the model again, this time using post-training quantization:
###Code
# A generator that provides a representative dataset
def representative_data_gen():
dataset_list = tf.data.Dataset.list_files(flowers_dir + '/*/*')
for i in range(100):
image = next(iter(dataset_list))
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
image = tf.cast(image / 255., tf.float32)
image = tf.expand_dims(image, 0)
yield [image]
saved_keras_model = 'model.h5'
model.save(saved_keras_model)
converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# These set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# And this sets the representative dataset so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224_quant.tflite', 'wb') as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
**Note:** An alternative technique to quantize the model is to use [quantization-aware training](https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/contrib/quantize/README.mdquantization-aware-training). This typically results in better accuracy because the training accounts for the decreased parameter precision. However, quantization-aware training requires modifications to the model graph, which is beyond the scope of this tutorial. Compare the accuracy So now we have a fully quantized model. To be sure the conversion went well, let's run some inferences using both the raw trained model and the new TensorFlow Lite model.First check the accuracy of the raw model:
###Code
batch_images, batch_labels = next(val_generator)
logits = model(batch_images)
prediction = np.argmax(logits, axis=1)
truth = np.argmax(batch_labels, axis=1)
keras_accuracy = tf.keras.metrics.Accuracy()
keras_accuracy(prediction, truth)
print("Raw model accuracy: {:.3%}".format(keras_accuracy.result()))
###Output
_____no_output_____
###Markdown
Now let's check the accuracy of the `.tflite` file, using the same dataset:(This TensorFlow Lite code is a bit more complicated. For details, read the guide to [TensorFlow Lite inference](https://www.tensorflow.org/lite/guide/inference).)
###Code
def set_input_tensor(interpreter, input):
input_details = interpreter.get_input_details()[0]
tensor_index = input_details['index']
input_tensor = interpreter.tensor(tensor_index)()[0]
# Inputs for the TFLite model must be uint8, so we quantize our input data.
# NOTE: This step is necessary only because we're receiving input data from
# ImageDataGenerator, which rescaled all image data to float [0,1]. When using
# bitmap inputs, they're already uint8 [0,255] so this can be replaced with:
# input_tensor[:, :] = input
scale, zero_point = input_details['quantization']
input_tensor[:, :] = np.uint8(input / scale + zero_point)
def classify_image(interpreter, input):
set_input_tensor(interpreter, input)
interpreter.invoke()
output_details = interpreter.get_output_details()[0]
output = interpreter.get_tensor(output_details['index'])
# Outputs from the TFLite model are uint8, so we dequantize the results:
scale, zero_point = output_details['quantization']
output = scale * (output - zero_point)
top_1 = np.argmax(output)
return top_1
interpreter = tf.lite.Interpreter('mobilenet_v2_1.0_224_quant.tflite')
interpreter.allocate_tensors()
# Collect all inference predictions in a list
batch_prediction = []
batch_truth = np.argmax(batch_labels, axis=1)
for i in range(len(batch_images)):
prediction = classify_image(interpreter, batch_images[i])
batch_prediction.append(prediction)
# Compare all predictions to the ground truth
tflite_accuracy = tf.keras.metrics.Accuracy()
tflite_accuracy(batch_prediction, batch_truth)
print("Quant TF Lite accuracy: {:.3%}".format(tflite_accuracy.result()))
###Output
_____no_output_____
###Markdown
A small drop in accuracy is expected with post-training quantization. You might be able to improve this by refining the representative dataset used during quantization.As mentioned earlier, you might also get better accuracy with [quantization-aware training](https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/contrib/quantize/README.mdquantization-aware-training). Compile for the Edge TPU Finally, we're ready to compile the model for the Edge TPU.First download the [Edge TPU Compiler](https://coral.ai/docs/edgetpu/compiler/):
###Code
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
###Output
_____no_output_____
###Markdown
Then compile the model:
###Code
! edgetpu_compiler mobilenet_v2_1.0_224_quant.tflite
###Output
_____no_output_____
###Markdown
That's it.The compiled model uses the same filename but with "_edgetpu" appended at the end. Download the model You can download the converted model and labels file from Colab like this:
###Code
from google.colab import files
files.download('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
files.download('flower_labels.txt')
###Output
_____no_output_____
###Markdown
*Copyright 2019 Google LLC**Licensed under the Apache License, Version 2.0 (the "License")*
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Retrain a classification model for Edge TPU using post-training quantization (with TF1) In this tutorial, we'll use TensorFlow 1.15 to create an image classification model, train it with a flowers dataset, and convert it into the TensorFlow Lite format that's compatible with the Edge TPU (available in [Coral devices](https://coral.ai/products/)).The model will be based on a pre-trained version of MobileNet V2. We'll start by retraining only the classification layers, reusing MobileNet's pre-trained feature extractor layers. Then we'll fine-tune the model by also updating weights in some of the feature extractor layers. This type of transfer learning is much faster than training the entire model from scratch.Once it's trained, we'll use post-training quantization to convert all the parameters to unit8 format, which increases inferencing speed and is required for compatibility on the Edge TPU.For more details about how to create a model that's compatible with the Edge TPU, see the [documentation at coral.ai](https://coral.ai/docs/edgetpu/models-intro/).**Note:** This tutorial requires TensorFlow 1.15. If you're using TF 2.0+, see [the 2.x version of this tutorial](https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_classification_ptq_tf2.ipynb). To start running all the code in this tutorial, select **Runtime > Run all** in the Colab toolbar. Import the required libraries
###Code
try:
# This %tensorflow_version magic only works in Colab.
%tensorflow_version 1.x
except Exception:
pass
# For your non-Colab code, be sure you have tensorflow==1.15
import tensorflow as tf
assert tf.__version__.startswith('1')
tf.enable_eager_execution()
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Prepare the training data First let's download and organize the flowers dataset we'll use to retrain the model (it contains 5 flower classes).Pay attention to this part so you can reproduce it with your own images dataset. In particular, notice that the "flower_photos" directory contains an appropriately-named directory for each class. The following code then randomizes and divides up all these photos into training and validation sets, and generates the labels file.
###Code
_URL = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
zip_file = tf.keras.utils.get_file(origin=_URL,
fname="flower_photos.tgz",
extract=True)
flowers_dir = os.path.join(os.path.dirname(zip_file), 'flower_photos')
###Output
_____no_output_____
###Markdown
Next, we use [`ImageDataGenerator`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) to rescale the image data into float values (divide by 255 so the tensor values are between 0 and 1), and call `flow_from_directory()` to create two generators: one for the training dataset and one for the validation dataset.
###Code
IMAGE_SIZE = 224
BATCH_SIZE = 64
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
train_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='training')
val_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='validation')
###Output
_____no_output_____
###Markdown
On each iteration, these generators provide a batch of images by reading images from disk and processing them to the proper tensor size (224 x 224). The output is a tuple of (images, labels). For example, you can see the shapes here:
###Code
image_batch, label_batch = next(val_generator)
image_batch.shape, label_batch.shape
###Output
_____no_output_____
###Markdown
Now save the class labels to a text file:
###Code
print (train_generator.class_indices)
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('flower_labels.txt', 'w') as f:
f.write(labels)
!cat flower_labels.txt
###Output
_____no_output_____
###Markdown
Build the modelNow we'll create a model that's capable of transfer learning on just the last fully-connected layer. We'll start with MobileNet V2 from Keras as the base model, which is pre-trained with the ImageNet dataset (trained to recognize 1,000 classes). This provides us a great feature extractor for image classification and we can then simply train a new classification layer with our own dataset.**Note:** Not all models from [```tf.keras.applications```](https://www.tensorflow.org/api_docs/python/tf/keras/applications) are compatible with the Edge TPU. For details, read about [quantizing Keras models](https://coral.ai/docs/edgetpu/models-intro/quantizing-keras-models). Create the base model When instantiating the `MobileNetV2` model, we specify the `include_top=False` argument in order to load the network *without* the classification layers at the top. Then we set `trainable` false to freeze all the weights in the base model. This effectively converts the model into a feature extractor because all the pre-trained weights and biases are preserved in the lower layers when we begin training for our classification head.
###Code
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)
# Create the base model from the pre-trained MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
###Output
_____no_output_____
###Markdown
Add a classification headNow we create a new [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model and pass the frozen MobileNet model from above as the base of the graph, and append new classification layers so we can set the final output dimension to match the number of classes in our dataset (5 types of flowers).
###Code
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(units=5, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Configure the modelAlthough this method is called `compile()`, it's basically a configuration step that's required before we can start training.
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
You can see a string summary of the final network with the `summary()` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
And because the majority of the model graph is frozen in the base model, weights from only the last convolution and dense layers are trainable:
###Code
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
###Output
_____no_output_____
###Markdown
Train the model Now we can train the model using data provided by the `train_generator` and `val_generator` we created at the beginning. This takes 5-10 minutes to finish.
###Code
history = model.fit_generator(train_generator,
epochs=10,
validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Review the learning curves
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
Fine tune the base model So far, we've only trained the classification layers—the weights of the pre-trained network were *not* changed. The accuracy results aren't bad, but could be better.One way we can increase the accuracy is to train (or "fine-tune") more layers from the pre-trained model. That is, we'll un-freeze some layers from the base model and adjust those weights (which were originally trained with 1,000 ImageNet classes) so they're better tuned for features found in our flowers dataset. Un-freeze more layers So instead of freezing the entire base model, we'll freeze individual layers.First, let's see how many layers are in the base model:
###Code
print("Number of layers in the base model: ", len(base_model.layers))
###Output
_____no_output_____
###Markdown
Let's try freezing just the bottom 100 layers.
###Code
base_model.trainable = True
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
###Output
_____no_output_____
###Markdown
Reconfigure the modelNow configure the model again, but this time with a lower training rate (the default is 0.001).
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
###Output
_____no_output_____
###Markdown
Continue training Now start training all the trainable layers. This starts with the weights we already trained in the classification layers, so we don't need as many epochs.
###Code
history_fine = model.fit_generator(train_generator,
epochs=5,
validation_data=val_generator)
###Output
_____no_output_____
###Markdown
Review the new learning curves Now that we've done some fine-tuning on the MobileNet V2 base model, let's check the accuracy.
###Code
acc = history_fine.history['acc']
val_acc = history_fine.history['val_acc']
loss = history_fine.history['loss']
val_loss = history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
###Output
_____no_output_____
###Markdown
This is better, but it's not ideal.The validation loss is much higher than the training loss, so there could be some overfitting during training. The overfitting might also be because the new training set is relatively small with less intra-class variance, compared to the original ImageNet dataset used to train MobileNet V2.So this model isn't trained to an accuracy that's production ready, but it works well enough as a demonstration. So let's move on and convert the model to be compatible with the Edge TPU. Convert to TFLite Ordinarily, creating a TensorFlow Lite model is just a few lines of code using the [`TFLiteConverter`](https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter). For example, this code creates a standard TensorFlow Lite model:
###Code
saved_keras_model = 'model.h5'
model.save(saved_keras_model)
converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
However, this `.tflite` file isn't compatible with the Edge TPU because although the `DEFAULT` optimizations flag will quantize the weights, the activation values are still in floating-point. So we must fully quantize the model to use int8 format for all parameter data (both weights and activations).To fully quantize the model, we need to perform [post-training quantization](https://www.tensorflow.org/lite/performance/post_training_quantization) with a representative dataset, which requires a few more arguments for the `TFLiteConverter`, and a function that builds a dataset that's representative of the training dataset. So let's convert the model again, this time using post-training quantization:
###Code
# A generator that provides a representative dataset
def representative_data_gen():
dataset_list = tf.data.Dataset.list_files(flowers_dir + '/*/*')
for i in range(100):
image = next(iter(dataset_list))
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
image = tf.cast(image / 255., tf.float32)
image = tf.expand_dims(image, 0)
yield [image]
saved_keras_model = 'model.h5'
model.save(saved_keras_model)
converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# These set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# And this sets the representative dataset so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224_quant.tflite', 'wb') as f:
f.write(tflite_model)
###Output
_____no_output_____
###Markdown
**Note:** An alternative technique to quantize the model is to use [quantization-aware training](https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/contrib/quantize/README.mdquantization-aware-training). This typically results in better accuracy because the training accounts for the decreased parameter precision. However, quantization-aware training requires modifications to the model graph, which is beyond the scope of this tutorial. Compare the accuracy So now we have a fully quantized model. To be sure the conversion went well, let's run some inferences using both the raw trained model and the new TensorFlow Lite model.First check the accuracy of the raw model:
###Code
batch_images, batch_labels = next(val_generator)
logits = model(batch_images)
prediction = np.argmax(logits, axis=1)
truth = np.argmax(batch_labels, axis=1)
keras_accuracy = tf.keras.metrics.Accuracy()
keras_accuracy(prediction, truth)
print("Raw model accuracy: {:.3%}".format(keras_accuracy.result()))
###Output
_____no_output_____
###Markdown
Now let's check the accuracy of the `.tflite` file, using the same dataset:(This TensorFlow Lite code is a bit more complicated. For details, read the guide to [TensorFlow Lite inference](https://www.tensorflow.org/lite/guide/inference).)
###Code
def set_input_tensor(interpreter, input):
input_details = interpreter.get_input_details()[0]
tensor_index = input_details['index']
input_tensor = interpreter.tensor(tensor_index)()[0]
# Inputs for the TFLite model must be uint8, so we quantize our input data.
# NOTE: This step is necessary only because we're receiving input data from
# ImageDataGenerator, which rescaled all image data to float [0,1]. When using
# bitmap inputs, they're already uint8 [0,255] so this can be replaced with:
# input_tensor[:, :] = input
scale, zero_point = input_details['quantization']
input_tensor[:, :] = np.uint8(input / scale + zero_point)
def classify_image(interpreter, input):
set_input_tensor(interpreter, input)
interpreter.invoke()
output_details = interpreter.get_output_details()[0]
output = interpreter.get_tensor(output_details['index'])
# Outputs from the TFLite model are uint8, so we dequantize the results:
scale, zero_point = output_details['quantization']
output = scale * (output - zero_point)
top_1 = np.argmax(output)
return top_1
interpreter = tf.lite.Interpreter('mobilenet_v2_1.0_224_quant.tflite')
interpreter.allocate_tensors()
# Collect all inference predictions in a list
batch_prediction = []
batch_truth = np.argmax(batch_labels, axis=1)
for i in range(len(batch_images)):
prediction = classify_image(interpreter, batch_images[i])
batch_prediction.append(prediction)
# Compare all predictions to the ground truth
tflite_accuracy = tf.keras.metrics.Accuracy()
tflite_accuracy(batch_prediction, batch_truth)
print("Quant TF Lite accuracy: {:.3%}".format(tflite_accuracy.result()))
###Output
_____no_output_____
###Markdown
 A small drop in accuracy is expected with post-training quantization. You might be able to improve this by refining the representative dataset used during quantization. As mentioned earlier, you might also get better accuracy with [quantization-aware training](https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/contrib/quantize/README.md#quantization-aware-training). Compile for the Edge TPU Finally, we're ready to compile the model for the Edge TPU. First download the [Edge TPU Compiler](https://coral.ai/docs/edgetpu/compiler/):
###Code
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
###Output
_____no_output_____
###Markdown
Then compile the model:
###Code
! edgetpu_compiler mobilenet_v2_1.0_224_quant.tflite
###Output
_____no_output_____
###Markdown
 That's it. The compiled model uses the same filename but with "_edgetpu" appended at the end. Download the model You can download the converted model and labels file from Colab like this:
###Code
from google.colab import files
files.download('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
files.download('flower_labels.txt')
###Output
_____no_output_____ |
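###Markdown
 Run on the Edge TPU (optional) With the compiled model and labels file on a device that has an Edge TPU, you can run inference using the TensorFlow Lite Python API with the Edge TPU delegate. The following is a minimal sketch rather than a verified tutorial step: it assumes the `tflite_runtime` package and the `libedgetpu` runtime are installed on the target device, and it feeds a dummy image just to exercise the pipeline.
###Code
# Runs on the target device (e.g., a Coral Dev Board), not in this Colab.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='mobilenet_v2_1.0_224_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

# A dummy uint8 input with the model's expected shape
input_details = interpreter.get_input_details()[0]
dummy = np.zeros(input_details['shape'], dtype=np.uint8)
interpreter.set_tensor(input_details['index'], dummy)
interpreter.invoke()

output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
print('Top class index:', np.argmax(output))
###Output
_____no_output_____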
Ian/.ipynb_checkpoints/weather_hw_for_avocados-checkpoint.ipynb | ###Markdown
 WeatherPy Analysis
• Temperatures are generally highest between -20 and 40 degrees latitude and steadily decline as the distance from this hotter range increases. The findings from the plotted data jibe with our intuitive sense of where the hottest parts of the earth are: the described latitude range covers southern Africa to the northern Mediterranean and northern Australia to central Japan.
• The relationship between humidity and latitude is not as obvious as that between temperature and latitude, but several trends are apparent. The 0 to 20 and 60 to 80 degree latitude ranges are on average the most humid. These observations are not surprising because they encompass the tropical zones and far northern North America and Eurasia, which tend to be very humid in late summer. Overall, humidity is not as clearly correlated with latitude as temperature is, and most plotted latitudes can show high humidity.
• Wind speed measured in miles per hour shows that moderate wind speeds appear at almost any latitude. The highest wind speeds, however, appear at and beyond 60 degrees north.
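To put a rough number on the latitude-temperature trend described above, the short check below fits a line to the retrieved data. It is a minimal sketch that assumes the `output_data/cities_df.csv` file written later in this notebook already exists, and it uses absolute latitude so the relationship is monotone.
###Code
# Quantify the latitude vs. temperature trend with a simple linear fit.
# Assumes output_data/cities_df.csv has been produced by the cells below.
import pandas as pd
from scipy.stats import linregress

cities_df = pd.read_csv('output_data/cities_df.csv')
fit = linregress(cities_df['Lat'].abs(), cities_df['Temp'])
print(f"slope = {fit.slope:.2f} F per degree of |latitude|, r = {fit.rvalue:.2f}")
###Output
_____no_output_____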
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from pprint import pprint
import datetime
today = datetime.date.today()
# Import API key
import api_keys
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
###Output
_____no_output_____
###Markdown
Generate Cities List
###Code
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
# testing size
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
#print(lat_lng)
# Identify nearest city latitude and longitude
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
#print(len(cities))
#print(cities)
###Output
_____no_output_____
###Markdown
Perform API Calls
###Code
# OpenWeatherMap API Key
api_key = api_keys.api_key
# Alternative API call structure / Save config information
url = "http://api.openweathermap.org/data/2.5/weather?units=Imperial&"
# Build query URL
query_url = url + "appid=" + api_key + "&q="
print(query_url)
# set up lists to hold reponse info
lat = []
lng = []
temp = []
humid = []
cloudy = []
windy = []
legit_cities = []
countries = []
date = []
i = -1
#print(cities)
# One-off test of the API call format (uses the last city generated above)
response = requests.get(query_url + city).json()
#print(response)
# Loop through list of cities and request data on each
print("Beginning Data Retrieval\n----------------------------")
for city in cities:
try:
response = requests.get(query_url + city).json()
lat.append(response['coord']['lat'])
lng.append(response['coord']['lon'])
temp.append(response['main']['temp'])
humid.append(response['main']['humidity'])
cloudy.append(response['clouds']['all'])
windy.append(response['wind']['speed'])
legit_cities.append(response['name'])
countries.append(response['sys']['country'])
date.append(response['dt'])
i = i + 1
print("Retrieving Results for Index " + str(i) + ': ' + city.title())
print("Data available for: " + city.title())
print("Adding to dataframe...")
print("----------------------------")
except:
pass
i = i + 1
print("Retrieving Results for Index " + str(i) + ': ' + city.title())
print("Data not available for: " + city.title())
print("Skipping...")
print("----------------------------")
#print(f"The latitude information received is: {lat}")
#print(f"The temperature information received is: {temp}")
#print(legit_cities)
#print(len(legit_cities))
#print(len(lat))
#print(len(temp))
# create a data frame from cities, lat, and temp
weather_dict = {
"City": legit_cities,
"Country":countries,
"Lat": lat,
"Lng": lng,
"Temp": temp,
"Humidity": humid,
"Cloudiness": cloudy,
"Wind Speed": windy,
"Date": date
}
weather_data = pd.DataFrame(weather_dict)
weather_data.to_csv('output_data/cities_df.csv')
weather_data.head(15)
#create date stamp
new_date = weather_data['Date'].mean()
graph_date = datetime.datetime.fromtimestamp(new_date)
datefinal = graph_date.strftime('%Y-%m-%d')
# * Temperature (F) vs. Latitude
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Temp"], marker="o")
# Incorporate the other graph properties
plt.title("Latitude vs. Temperature" + ' (Date: ' + datefinal + ')')
plt.ylabel("Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/TemperaturevsLatitude.png", dpi = 900, linewidth=1)
# Show plot
plt.show()
# Humidity (%) vs. Latitude
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Humidity"], marker="o")
# Incorporate the other graph properties
plt.title("Latitude vs. Humidity" + ' (Date: ' + datefinal + ')')
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/HumidityvsLatitude.png", dpi = 900, linewidth=1)
# Show plot
plt.show()
# Cloudiness (%) vs. Latitude
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Cloudiness"], marker="o")
# Incorporate the other graph properties
plt.title("Latitude vs. Cloudiness" + ' (Date: ' + datefinal + ')')
plt.ylabel("% Sky Clouded")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/CloudinessvsLatitude.png", dpi = 900, linewidth=1)
# Show plot
plt.show()
# Wind Speed (mph) vs. Latitude
# Build a scatter plot for each data type
plt.scatter(weather_data["Lat"], weather_data["Wind Speed"], marker="o")
# Incorporate the other graph properties
plt.title("Latitude vs Windiness" + ' (Date: ' + datefinal + ')')
plt.ylabel("Wind Speed (MPH)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
plt.savefig("output_data/WindinessvsLatitude.png", dpi = 900, linewidth=1)
# Show plot
plt.show()
###Output
_____no_output_____ |
doc/source/examples/hydro_thermal/TS.ipynb | ###Markdown
 Solve the hydro-thermal power system planning problem: TS approach=====================================================The TS approach adds four additional state variables (the lagged inflows of the four regions) to transform the Markovian problem into a stage-wise independent one.
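Concretely, the `sampler` function below encodes a multiplicative periodic autoregressive model for the monthly inflows. Writing $\mu_{m,i}$ for the historical mean inflow of region $i$ in month $m = t \bmod 12$, $\gamma_{m,i}$ for the autoregressive weight, and $\epsilon_t$ for lognormal noise, the process the code implements is
$$\text{inflow}_{t,i} = \epsilon_{t,i}\left[\gamma_{m,i}\,\frac{\mu_{m,i}}{\mu_{m-1,i}}\,\text{inflow}_{t-1,i} + \left(1-\gamma_{m,i}\right)\mu_{m,i}\right], \qquad \log \epsilon_t \sim \mathcal{N}(0, \Sigma_m),$$
so carrying the four lagged inflows as state variables makes each stage's randomness independent of the history.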
###Code
import pandas
import numpy
import gurobipy
from msppy.msp import MSLP
from msppy.solver import SDDP
from msppy.evaluation import Evaluation,EvaluationTrue
import sys
import seaborn
seaborn.set_style('darkgrid')
gamma = numpy.array(pandas.read_csv(
"./data/gamma.csv",
names=[0,1,2,3],
index_col=0,
skiprows=1,
))
sigma = [
numpy.array(pandas.read_csv(
"./data/sigma_{}.csv".format(i),
names=[0,1,2,3],
index_col=0,
skiprows=1,
)) for i in range(12)
]
exp_mu = numpy.array(pandas.read_csv(
"./data/exp_mu.csv",
names=[0,1,2,3],
index_col=0,
skiprows=1,
))
hydro_ = pandas.read_csv("./data/hydro.csv", index_col=0)
demand = pandas.read_csv("./data/demand.csv", index_col=0)
deficit_ = pandas.read_csv("./data/deficit.csv", index_col=0)
exchange_ub = pandas.read_csv("./data/exchange.csv", index_col=0)
exchange_cost = pandas.read_csv("./data/exchange_cost.csv", index_col=0)
thermal_ = [pandas.read_csv("./data/thermal_{}.csv".format(i),
index_col=0) for i in range(4)]
stored_initial = hydro_['INITIAL'][:4]
inflow_initial = hydro_['INITIAL'][4:8]
def sampler(t):
def inner(random_state):
noise = numpy.exp(
random_state.multivariate_normal(mean=[0]*4, cov=sigma[t%12]))
coef = [None]*4
rhs = [None]*4
for i in range(4):
coef[i] = -noise[i]*gamma[t%12][i]*exp_mu[t%12][i]/exp_mu[(t-1)%12][i]
rhs[i] = noise[i]*(1-gamma[t%12][i])*exp_mu[t%12][i]
return (coef+rhs)
return inner
T = 120
HydroThermal = MSLP(T=T, bound=0, discount=0.9906)
for t in range(T):
m = HydroThermal[t]
stored_now,stored_past = m.addStateVars(4, ub=hydro_['UB'][:4], name="stored")
inflow_now,inflow_past = m.addStateVars(4, name="inflow")
spill = m.addVars(4, obj=0.001, name="spill")
hydro = m.addVars(4, ub=hydro_['UB'][-4:], name="hydro")
deficit = m.addVars(
[(i,j) for i in range(4) for j in range(4)],
ub = [
demand.iloc[t%12][i] * deficit_['DEPTH'][j]
for i in range(4) for j in range(4)
],
obj = [
deficit_['OBJ'][j]
for i in range(4) for j in range(4)
],
name = "deficit")
thermal = [None] * 4
for i in range(4):
thermal[i] = m.addVars(
len(thermal_[i]),
ub=thermal_[i]['UB'],
lb=thermal_[i]['LB'],
obj=thermal_[i]['OBJ'],
name="thermal_{}".format(i)
)
exchange = m.addVars(5,5, obj=exchange_cost.values.flatten(),
ub=exchange_ub.values.flatten(), name="exchange")
thermal_sum = m.addVars(4, name="thermal_sum")
m.addConstrs(thermal_sum[i] ==
gurobipy.quicksum(thermal[i].values()) for i in range(4))
for i in range(4):
m.addConstr(
thermal_sum[i]
+ gurobipy.quicksum(deficit[(i,j)] for j in range(4))
+ hydro[i]
- gurobipy.quicksum(exchange[(i,j)] for j in range(5))
+ gurobipy.quicksum(exchange[(j,i)] for j in range(5))
== demand.iloc[t%12][i]
)
m.addConstr(
gurobipy.quicksum(exchange[(j,4)] for j in range(5))
- gurobipy.quicksum(exchange[(4,j)] for j in range(5))
== 0
)
m.addConstrs(
stored_now[i] + spill[i] + hydro[i] - stored_past[i] == inflow_now[i]
for i in range(4)
)
if t == 0:
m.addConstrs(stored_past[i] == stored_initial[i] for i in range(4))
m.addConstrs(inflow_now[i] == inflow_initial[i] for i in range(4))
else:
TS = m.addConstrs(inflow_now[i] + inflow_past[i] == 0 for i in range(4))
m.add_continuous_uncertainty(
uncertainty=sampler(t),
locations=(
[(TS[i],inflow_past[i]) for i in range(4)]
+ [TS[i] for i in range(4)]
),
)
HydroThermal.discretize(n_samples=100, random_state=888)
HT_sddp = SDDP(HydroThermal)
HT_sddp.solve(max_iterations=30, logToConsole=0)
HT_sddp.plot_bounds(smooth=1);
result = Evaluation(HydroThermal)
result.run(n_simulations=100)
resultTrue = EvaluationTrue(HydroThermal)
resultTrue.run(n_simulations=100)
result.gap, result.CI
resultTrue.CI
###Output
_____no_output_____ |
pipeline/SuSiE.ipynb | ###Markdown
 Fine-mapping with SuSiE modelThis notebook conducts fine-mapping with complete (individual-level) data. Unlike the susie_RSS module, it performs the analysis on one theme (condition) at a time. Input1. A region list documenting the regions to be analyzed.2. A list of paths where the per-gene phenotype data are stored.3. A list of paths where the per-gene genotype data are stored.By default, inputs 2 and 3 are the outputs of the data_preprocessing module. Example genotype list
region dir
ENSG00000000457 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000000457.bed
ENSG00000000460 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000000460.bed
ENSG00000000938 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000000938.bed
ENSG00000000971 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000000971.bed
ENSG00000001036 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000001036.bed
ENSG00000001084 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000001084.bed
ENSG00000001167 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000001167.bed
ENSG00000001460 /mnt/mfs/statgen/xqtl_workflow_testing/demo/genotype_reformmating/demo_per_gene_plink/ENSG00000001460.bed
Example phenotype list
region dir
ENSG00000000457 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000000457.mol_phe.bed.gz
ENSG00000000460 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000000460.mol_phe.bed.gz
ENSG00000000938 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000000938.mol_phe.bed.gz
ENSG00000000971 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000000971.mol_phe.bed.gz
ENSG00000001036 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000001036.mol_phe.bed.gz
ENSG00000001084 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000001084.mol_phe.bed.gz
ENSG00000001167 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000001167.mol_phe.bed.gz
ENSG00000001460 /mnt/mfs/statgen/xqtl_workflow_testing/demo/phenotype_reformat/demo.ENSG00000001460.mol_phe.bed.gz
OutputFor each analysis unit we output:1. Analysis results in RDS format: a fitted SuSiE (or mvSuSiE) model.2. A VCF file with the selected SNPs and their ES:PIP:CS fields. MWE
###Code
sos run pipeline/SuSiE.ipynb uni_susie \
--genoFile /mnt/mfs/statgen/snuc_pseudo_bulk/Ast/genotype_per_gene/MWE.region_plink_files/plink_files_list.txt \
--cwd MWE/susie_per_gene/ \
--region-list data/mwe/MWE.region.list \
--phenoFile data/mwe/MWE.phenotype.list \
--covFile data/mwe/MWE.covar.list -J 8 -c csg.yml -q csg &
sos run pipeline/SuSiE.ipynb mv_susie \
--genoFile ./mwe.region_plink_files/plink_files_list.txt \
--cwd MWE/rds_per_gene/ \
--region-list MWE.region.list \
--phenoFile MWE.phenotype.list \
--covFile MWE.covar.list &
[global]
import glob
import pandas as pd
# Input
parameter: genoFile = paths
parameter: phenoFile = path
parameter: covFile = path
parameter: region_list = path
parameter: cwd = path
parameter: name = "demo"
region_tbl = pd.read_csv(region_list,sep = "\t")
genoFile = pd.read_csv(genoFile,sep = "\t",names = ["gene_id","path"],header = 0).merge(region_tbl,on = "gene_id").to_dict("records")
## Path to the work directory where output will be written
## Container that contains the necessary packages
parameter: container = ""
# For cluster jobs, number commands to run per job
parameter: job_size = 1
# Wall clock time expected
parameter: walltime = "5h"
# Memory expected
parameter: mem = "16G"
# Number of threads
parameter: numThreads = 20
# use this function to edit memory string for PLINK input
from sos.utils import expand_size
###Output
_____no_output_____
###Markdown
Univariate SuSiE
###Code
[uni_susie_1]
parameter: max_L = 10
# Remove a variant if its fraction of missing genotype calls exceeds `imiss`
parameter: imiss = 0.1
parameter: maf = 0.05
input: phenoFile,covFile, for_each = "genoFile"
output: f'{cwd:a}/cache/{name}.{_genoFile["gene_id"]}.unisusie.fit.rds'
task: trunk_workers = 1, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[0]:bn}'
R: expand = '${ }', stdout = f"{_output:nn}.stdout", stderr = f"{_output:nn}.stderr", container = container
## Define function
compute_maf <- function(geno){
f <- mean(geno,na.rm = TRUE)/2
return(min(f, 1-f))
}
compute_missing <- function(geno){
miss <- sum(is.na(geno))/length(geno)
return(miss)
}
mean_impute <- function(geno){
f <- apply(geno, 2, function(x) mean(x,na.rm = TRUE))
for (i in 1:length(f)) geno[,i][which(is.na(geno[,i]))] <- f[i]
return(geno)
}
is_zero_variance <- function(x) {
if (length(unique(x))==1) return(T)
else return(F)
}
filter_X <- function(X, missing_rate_thresh, maf_thresh) {
rm_col <- which(apply(X, 2, compute_missing) > missing_rate_thresh)
if (length(rm_col)) X <- X[, -rm_col]
rm_col <- which(apply(X, 2, compute_maf) < maf_thresh)
if (length(rm_col)) X <- X[, -rm_col]
rm_col <- which(apply(X, 2, is_zero_variance))
if (length(rm_col)) X <- X[, -rm_col]
return(mean_impute(X))
}
compute_cov_flash <- function(Y, error_cache = NULL){
covar <- diag(ncol(Y))
tryCatch({
fl <- flashier::flash(Y, var.type = 2, prior.family = c(flashier::prior.normal(), flashier::prior.normal.scale.mix()), backfit = TRUE, verbose.lvl=0)
if(fl$n.factors==0){
covar <- diag(fl$residuals.sd^2)
} else {
fsd <- sapply(fl$fitted.g[[1]], '[[', "sd")
covar <- diag(fl$residuals.sd^2) + crossprod(t(fl$flash.fit$EF[[2]]) * fsd)
}
if (nrow(covar) == 0) {
covar <- diag(ncol(Y))
stop("Computed covariance matrix has zero rows")
}
}, error = function(e) {
if (!is.null(error_cache)) {
saveRDS(list(data=Y, message=warning(e)), error_cache)
warning("FLASH failed. Using Identity matrix instead.")
warning(e)
} else {
stop(e)
}
})
s <- apply(Y, 2, sd, na.rm=T)
if (length(s)>1) s = diag(s)
else s = matrix(s,1,1)
covar <- s%*%cov2cor(covar)%*%s
return(covar)
}
read_gene_pheno = function(path){
arg = paste0(c("tabix -h ",path," ${_genoFile["#chr"]}:${_genoFile["start"]}-${_genoFile["start"]+1}"),collapse = "")
result = system(arg,intern = T)
output = read.table(text = result[2], sep = "\t")
colnames(output) = result[1]%>%stringr::str_split("\t")%>%unlist()
return(output)
}
remove_covX = function(X,covar){
for ( i in 1:ncol(X) ) {
X[,i] = .lm.fit(x = covar, y = X[,i])$residuals
}
X = scale(X)
}
## Load Library
library("susieR")
library("plink2R")
library("dplyr")
library("readr")
library("stringr")
library("purrr")
###
# Core code
###
# Input
### Genotype
geno = read_plink("${path(_genoFile["path"]):n}")
X = filter_X(geno$bed,${imiss}, ${maf} )
### Phenotype
phenotype_list = read_delim("${_input[0]}","\t")
covar_list = read_delim("${_input[1]}","\t")
covar_list = covar_list%>%mutate(covar = map(path, ~read_delim(.x,"\t")%>%select(-`#id`)%>%na.omit%>%t()))
phenotype_list = inner_join(phenotype_list,covar_list, by = "tissue")
phenotype_list = phenotype_list%>%mutate(Y = map(path.x, ~read_gene_pheno(.x)%>%select(-c(`#chr`,start,end,gene_id))%>%t%>%as.matrix))%>%mutate(
#### Get residue for each of tissue
Y_resid = map2(Y,covar,~.lm.fit(x = .y, y = .x)$residuals%>%t%>%as_tibble))
y_res = phenotype_list%>%select(Y_resid)%>%tidyr::unnest(Y_resid)%>%t%>%as.matrix
colnames(y_res) = phenotype_list$tissue
    X_list = phenotype_list%>%mutate( X_data = map(covar,~X[intersect(rownames(X),paste0(rownames(.x),":",rownames(.x))),]), # Keep only the samples shared between genotype and covariates
                 X_resid = map2(covar,X_data,~remove_covX(X = .y, covar = .x)))%>%pull(X_resid) # Regress covariates out of X, per tissue
non_missing = list()
fitted = list()
# Fine-mapping with SuSiE
for (r in 1:ncol(y_res)) {
non_missing[[r]] = which(!is.na(y_res[,r]))
st = proc.time()
X = X_list[[r]]
print(paste("Dimension of X matrix:", nrow(X), ncol(X)))
print(paste("Dimension of Y matrix:", nrow(y_res), ncol(y_res)))
fitted[[r]] <- susieR::susie(X[non_missing[[r]],], y_res[non_missing[[r]],r],
L=${max_L},
max_iter=1000,
estimate_residual_variance=TRUE,
estimate_prior_variance=TRUE,
refine=TRUE)
fitted[[r]]$time = proc.time() - st
fitted[[r]]$cs_corr = susieR:::get_cs_correlation(fitted[[r]], X=X[non_missing[[r]],])
fitted[[r]]$cs_snps = names(fitted[[r]]$X_column_scale_factors[unlist(fitted[[r]]$sets$cs)])
fitted[[r]]$variable_name = names(fitted[[r]]$X_column_scale_factors)
fitted[[r]]$coef = coef.susie(fitted[[r]])
}
names(fitted) = phenotype_list$tissue
saveRDS(fitted, ${_output[0]:r})
[uni_susie_2]
input: group_with = "genoFile"
output: f"{_input:n}.vcf.bgz"
task: trunk_workers = 1, trunk_size = 1, walltime = '2h', mem = '55G', cores = 1, tags = f'{step_name}_{_output[0]:bn}'
R: expand = '${ }', stdout = f"{_output:nn}.stdout", stderr = f"{_output:nn}.stderr"
## Define create_vcf function
create_vcf = function (chrom, pos, nea, ea, snp = NULL, ea_af = NULL, effect = NULL,
se = NULL, pval = NULL, name = NULL,cs = NULL, pip = NULL)
{
stopifnot(length(chrom) == length(pos))
if (is.null(snp)) {
snp <- paste0(chrom, ":", pos)
}
snp <- paste0(chrom, ":", pos)
nsnp <- length(chrom)
gen <- list()
    ## Set up data content for each sample column
if (!is.null(ea_af))
gen[["AF"]] <- matrix(ea_af, nsnp)
if (!is.null(effect))
gen[["ES"]] <- matrix(effect, nsnp)
if (!is.null(se))
gen[["SE"]] <- matrix(se, nsnp)
if (!is.null(pval))
gen[["LP"]] <- matrix(-log10(pval), nsnp)
if (!is.null(cs))
gen[["CS"]] <- matrix(cs, nsnp)
if (!is.null(pip))
gen[["PIP"]] <- matrix(pip, nsnp)
gen <- S4Vectors::SimpleList(gen)
    ## Set up SNP info for the fixed columns
gr <- GenomicRanges::GRanges(chrom, IRanges::IRanges(start = pos,
end = pos + pmax(nchar(nea), nchar(ea)) - 1, names = snp))
coldata <- S4Vectors::DataFrame(Studies = name, row.names = name)
    ## Set up header information
hdr <- VariantAnnotation::VCFHeader(header = IRanges::DataFrameList(fileformat = S4Vectors::DataFrame(Value = "VCFv4.2",
row.names = "fileformat")), sample = name)
VariantAnnotation::geno(hdr) <- S4Vectors::DataFrame(Number = c("A",
"A", "A", "A", "A", "A"), Type = c("Float", "Float",
"Float", "Float", "Float", "Float"), Description = c("Effect size estimate relative to the alternative allele",
"Standard error of effect size estimate", "-log10 p-value for effect estimate",
"Alternate allele frequency in the association study",
"The CS this variate are captured, 0 indicates not in any cs", "The posterior inclusion probability to a CS"),
row.names = c("ES", "SE", "LP", "AF", "CS", "PIP"))
## Save only the meta information in the sample columns
VariantAnnotation::geno(hdr) <- subset(VariantAnnotation::geno(hdr),
rownames(VariantAnnotation::geno(hdr)) %in% names(gen))
## Save VCF
vcf <- VariantAnnotation::VCF(rowRanges = gr, colData = coldata,
exptData = list(header = hdr), geno = gen)
VariantAnnotation::alt(vcf) <- Biostrings::DNAStringSetList(as.list(ea))
VariantAnnotation::ref(vcf) <- Biostrings::DNAStringSet(nea)
## Add fixed values
VariantAnnotation::fixed(vcf)$FILTER <- "PASS"
return(sort(vcf))
}
library("susieR")
library("dplyr")
library("tibble")
library("purrr")
library("readr")
library("tidyr")
library("stringr")
# Get list of cs snps
susie_list = readRDS(${_input:r})
susie_tb_ls = list()
for (i in 1:length(susie_list)){
        susie_tb = tibble( snps = names(susie_list[[i]]$pip)[which( susie_list[[i]]$pip >= 0)], snps_index = which(( susie_list[[i]]$pip >= 0)) )
susie_tb_ls[[i]]= susie_tb%>%mutate( cs = map(snps_index,~which( susie_list[[i]]$sets$cs %in% .x))%>%as.numeric%>%replace_na(0),
pip = map_dbl(snps_index,~( susie_list[[i]]$pip[.x])),
coef = map_dbl(snps_index,~(coef.susie( susie_list[[i]])[.x+1])))
}
    # Progressively join the per-tissue tables on SNP id whenever there is more than one tissue
    if(length(susie_tb_ls) > 1){
        for(i in 2:length(susie_tb_ls)){
            susie_tb_ls[[i]] = full_join(susie_tb_ls[[i-1]],susie_tb_ls[[i]], by = "snps")
        }
    }
m = c("cs","pip","coef")
output = list()
for(i in m){
output[[i]] = susie_tb_ls[[length(susie_tb_ls)]]%>%select(contains(i))%>%as.matrix
}
snps_tb = susie_tb_ls[[length(susie_tb_ls)]]%>%mutate(
chr = map_chr(snps,~read.table(text = .x,sep = ":",as.is = T)$V1),
pos_alt_ref = map_chr(snps,~read.table(text = .x,sep = ":",as.is = TRUE)$V2),
pos = map_dbl(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE)$V1),
alt = map_chr(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE, colClass = "character")$V2),
ref = map_chr(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE, colClass = "character")$V3))
snps_tb = snps_tb%>%filter(str_detect(ref, "[ACTG]") & str_detect(alt, "[ACTG]"))
output_vcf = create_vcf(
chrom = snps_tb$chr,
pos = snps_tb$pos,
ea = snps_tb$alt,
nea = snps_tb$ref,
effect = snps_tb%>%select(contains("coef"))%>%as.matrix ,
pip = snps_tb%>%select(contains("pip"))%>%as.matrix,
cs = snps_tb%>%select(contains("cs"))%>%as.matrix,
      name = names(susie_list)
    )
    VariantAnnotation::writeVcf(output_vcf,${_output:nr},index = TRUE)
[*_susie_3]
input: group_by = "all"
output: f'{cwd:a}/{name}.susie.output_list.txt'
python: expand= "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
import pandas as pd
    pd.DataFrame({"output_vcf" : [$[_input:ar,]]}).to_csv("$[_output]", index = False, header = False, sep = "\t")
[mv_susie_1]
parameter: max_L = 10
# Remove a variant if its fraction of missing genotype calls exceeds `imiss`
parameter: imiss = 0.1
parameter: maf = 0.05
# Only analyze `cis` variants -- cis = N means using N variants around the center column of X matrix
parameter: cis = 'NULL'
parameter: prior = path
input: phenoFile,covFile, for_each = "genoFile"
output: mvsusie = f'{cwd:a}/{name}.{_genoFile["gene_id"]}{("_cis_%s" % cis) if cis != "NULL" else ""}.mvsusie.rds'
task: trunk_workers = 1, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[0]:bn}'
R: expand = '${ }', stdout = f"{_output[0]:nn}.stdout", stderr = f"{_output[0]:nn}.stderr", container = container
###
# Utility functions
###
compute_maf <- function(geno){
f <- mean(geno,na.rm = TRUE)/2
return(min(f, 1-f))
}
compute_missing <- function(geno){
miss <- sum(is.na(geno))/length(geno)
return(miss)
}
mean_impute <- function(geno){
f <- apply(geno, 2, function(x) mean(x,na.rm = TRUE))
for (i in 1:length(f)) geno[,i][which(is.na(geno[,i]))] <- f[i]
return(geno)
}
is_zero_variance <- function(x) {
if (length(unique(x))==1) return(T)
else return(F)
}
filter_X <- function(X, missing_rate_thresh, maf_thresh) {
rm_col <- which(apply(X, 2, compute_missing) > missing_rate_thresh)
if (length(rm_col)) X <- X[, -rm_col]
rm_col <- which(apply(X, 2, compute_maf) < maf_thresh)
if (length(rm_col)) X <- X[, -rm_col]
rm_col <- which(apply(X, 2, is_zero_variance))
if (length(rm_col)) X <- X[, -rm_col]
return(mean_impute(X))
}
compute_cov_flash <- function(Y, error_cache = NULL){
covar <- diag(ncol(Y))
tryCatch({
fl <- flashier::flash(Y, var.type = 2, prior.family = c(flashier::prior.normal(), flashier::prior.normal.scale.mix()), backfit = TRUE, verbose.lvl=0)
if(fl$n.factors==0){
covar <- diag(fl$residuals.sd^2)
} else {
fsd <- sapply(fl$fitted.g[[1]], '[[', "sd")
covar <- diag(fl$residuals.sd^2) + crossprod(t(fl$flash.fit$EF[[2]]) * fsd)
}
if (nrow(covar) == 0) {
covar <- diag(ncol(Y))
stop("Computed covariance matrix has zero rows")
}
}, error = function(e) {
if (!is.null(error_cache)) {
saveRDS(list(data=Y, message=warning(e)), error_cache)
warning("FLASH failed. Using Identity matrix instead.")
warning(e)
} else {
stop(e)
}
})
s <- apply(Y, 2, sd, na.rm=T)
if (length(s)>1) s = diag(s)
else s = matrix(s,1,1)
covar <- s%*%cov2cor(covar)%*%s
return(covar)
}
compute_cov_diag <- function(Y){
covar <- diag(apply(Y, 2, var, na.rm=T))
return(covar)
}
get_center <- function(k,n) {
## For given number k, get the range k surrounding n/2
## but have to make sure it does not go over the bounds
if (is.null(k)) {
return(1:n)
}
start = floor(n/2 - k/2)
end = floor(n/2 + k/2)
if (start<1) start = 1
if (end>n) end = n
return(start:end)
}
get_prior_indices <- function(Y, U) {
# make sure the prior col/rows match the colnames of the Y matrix
y_names = colnames(Y)
u_names = colnames(U)
if (is.null(y_names) || is.null(u_names)) {
return(NULL)
} else if (identical(y_names, u_names)) {
return(NULL)
} else {
return(match(y_names, u_names))
}
}
    ###
    # Core code
    ###
    library("plink2R")
    library("dplyr")
    library("readr")
    library("purrr")
    library("stringr")
    # Read the phenotype rows for this gene region via tabix (same helper as in uni_susie_1)
    read_gene_pheno = function(path){
        arg = paste0(c("tabix -h ",path," ${_genoFile["#chr"]}:${_genoFile["start"]}-${_genoFile["start"]+1}"),collapse = "")
        result = system(arg,intern = T)
        output = read.table(text = result[2], sep = "\t")
        colnames(output) = result[1]%>%stringr::str_split("\t")%>%unlist()
        return(output)
    }
    # Input
    ### Genotype
    geno = read_plink("${path(_genoFile["path"]):n}")
X = filter_X(geno$bed,${imiss}, ${maf} )
X = X[,get_center(${cis}, ncol(X))]
### Phenotype
phenotype_list = read_delim("${_input[0]}","\t")
covar_list = read_delim("${_input[1]}","\t")
covar_list = covar_list%>%mutate(covar = map(path, ~read_delim(.x,"\t")%>%select(-`#id`)%>%na.omit%>%t()))
phenotype_list = inner_join(phenotype_list,covar_list, by = "tissue")
phenotype_list = phenotype_list%>%mutate(Y = map(path.x, ~read_gene_pheno(.x)%>%select(-c(`#chr`,start,end,gene_id))%>%t%>%as.matrix))%>%mutate(
#### Get residue for each of tissue
Y_resid = map2(Y,covar,~.lm.fit(x = .y, y = .x)$residuals%>%t%>%as_tibble))
y_res = phenotype_list%>%select(Y_resid)%>%tidyr::unnest(Y_resid)%>%t%>%as.matrix
    colnames(y_res) = phenotype_list$tissue
    # Stop early with a clear message if the prior file is missing
    if (!file.exists(${prior:r})) stop(paste("Prior file does not exist:", ${prior:r}))
    prior = readRDS(${prior:r})
print(paste("Number of components in the mixture prior:", length(prior$U)))
prior = mvsusieR::create_mash_prior(mixture_prior=list(weights=prior$w, matrices=prior$U), include_indices = get_prior_indices(y_res, prior$U[[1]]), max_mixture_len=-1)
print(paste("Dimension of X matrix:", nrow(X), ncol(X)))
print(paste("Dimension of Y matrix:", nrow(y_res), ncol(y_res)))
    # GWAS Summary statistics
    # Per-tissue indices of samples with observed phenotypes (y_res may contain NAs)
    non_missing = lapply(1:ncol(y_res), function(r) which(!is.na(y_res[,r])))
    univariate_res = lapply(1:ncol(y_res), function(r) susieR:::univariate_regression(X[non_missing[[r]], ], y_res[non_missing[[r]], r]))
bhat = do.call(cbind, lapply(1:ncol(y_res), function(r) univariate_res[[r]]$betahat))
sbhat = do.call(cbind, lapply(1:ncol(y_res), function(r) univariate_res[[r]]$sebetahat))
saveRDS(list(bhat=bhat, sbhat=sbhat), "${_output[0]:nn}.sumstat.rds")
rm(bhat)
rm(sbhat)
    # Multivariate fine-mapping
    # Residual variance across conditions, estimated with FLASH (identity-based fallback on failure)
    resid_Y = compute_cov_flash(y_res)
    st = proc.time()
mv_res = mvsusieR::mvsusie(X, y_res, L=${max_L},
prior_variance=prior, residual_variance=resid_Y,
precompute_covariances=F, compute_objective=T,
estimate_residual_variance=F, estimate_prior_variance=T, estimate_prior_method='EM',
max_iter = 100, n_thread=1, approximate=F)
mv_res$time = proc.time() - st
mv_res$cs_corr = susieR:::get_cs_correlation(mv_res, X=X)
saveRDS(mv_res, ${_output[0]:r})
[mv_susie_2]
input: group_with = "genoFile"
output: f"{_input:n}.vcf.bgz"
task: trunk_workers = 1, trunk_size = 1, walltime = '2h', mem = '55G', cores = 1, tags = f'{step_name}_{_output[0]:bn}'
R: expand = '${ }', stdout = f"{_output:nn}.stdout", stderr = f"{_output:nn}.stderr"
## Define create_vcf function
create_vcf = function (chrom, pos, nea, ea, snp = NULL, ea_af = NULL, effect = NULL,
se = NULL, pval = NULL, name = NULL,cs = NULL, pip = NULL)
{
stopifnot(length(chrom) == length(pos))
if (is.null(snp)) {
snp <- paste0(chrom, ":", pos)
}
snp <- paste0(chrom, ":", pos)
nsnp <- length(chrom)
gen <- list()
    ## Set up data content for each sample column
if (!is.null(ea_af))
gen[["AF"]] <- matrix(ea_af, nsnp)
if (!is.null(effect))
gen[["ES"]] <- matrix(effect, nsnp)
if (!is.null(se))
gen[["SE"]] <- matrix(se, nsnp)
if (!is.null(pval))
gen[["LP"]] <- matrix(-log10(pval), nsnp)
if (!is.null(cs))
gen[["CS"]] <- matrix(cs, nsnp)
if (!is.null(pip))
gen[["PIP"]] <- matrix(pip, nsnp)
gen <- S4Vectors::SimpleList(gen)
    ## Set up SNP info for the fixed columns
gr <- GenomicRanges::GRanges(chrom, IRanges::IRanges(start = pos,
end = pos + pmax(nchar(nea), nchar(ea)) - 1, names = snp))
coldata <- S4Vectors::DataFrame(Studies = name, row.names = name)
    ## Set up header information
hdr <- VariantAnnotation::VCFHeader(header = IRanges::DataFrameList(fileformat = S4Vectors::DataFrame(Value = "VCFv4.2",
row.names = "fileformat")), sample = name)
VariantAnnotation::geno(hdr) <- S4Vectors::DataFrame(Number = c("A",
"A", "A", "A", "A", "A"), Type = c("Float", "Float",
"Float", "Float", "Float", "Float"), Description = c("Effect size estimate relative to the alternative allele",
"Standard error of effect size estimate", "-log10 p-value for effect estimate",
"Alternate allele frequency in the association study",
"The CS this variate are captured, 0 indicates not in any cs", "The posterior inclusion probability to a CS"),
row.names = c("ES", "SE", "LP", "AF", "CS", "PIP"))
## Save only the meta information in the sample columns
VariantAnnotation::geno(hdr) <- subset(VariantAnnotation::geno(hdr),
rownames(VariantAnnotation::geno(hdr)) %in% names(gen))
## Save VCF
vcf <- VariantAnnotation::VCF(rowRanges = gr, colData = coldata,
exptData = list(header = hdr), geno = gen)
VariantAnnotation::alt(vcf) <- Biostrings::DNAStringSetList(as.list(ea))
VariantAnnotation::ref(vcf) <- Biostrings::DNAStringSet(nea)
## Add fixed values
VariantAnnotation::fixed(vcf)$FILTER <- "PASS"
return(sort(vcf))
}
library("susieR")
library("dplyr")
library("tibble")
library("purrr")
library("readr")
library("tidyr")
# Get list of cs snps
res = readRDS(${_input:r})
output_snps = tibble( snps = res$variable_name[which(res$pip >= 0)], snps_index = which((res$pip >= 0)) )
output_snps = output_snps%>%mutate( cs = map(snps_index,~which(res$sets$cs %in% .x))%>%as.numeric%>%replace_na(0),
pip = map_dbl(snps_index,~(res$pip[.x])),
chr = map_chr(snps,~read.table(text = .x,sep = ":",as.is = T)$V1),
pos_alt_ref = map_chr(snps,~read.table(text = .x,sep = ":",as.is = TRUE)$V2),
pos = map_dbl(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE)$V1),
alt = map_chr(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE, colClass = "character")$V2),
ref = map_chr(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE, colClass = "character")$V3))
effect_mtr = res$coef[output_snps$snps_index+1]%>%as.matrix
colnames(effect_mtr) = "${name}"
rownames(effect_mtr) = output_snps$snps
cs_mtr = effect_mtr
for(i in 1:nrow(cs_mtr)) cs_mtr[i,] = output_snps$cs[[i]]
pip_mtr = effect_mtr
for(i in 1:nrow(pip_mtr)) pip_mtr[i,] = output_snps$pip[[i]]
output_vcf = create_vcf(
chrom = output_snps$chr,
pos = output_snps$pos,
ea = output_snps$alt,
nea = output_snps$ref,
effect = effect_mtr ,
pip = pip_mtr,
cs = cs_mtr,
name = colnames(effect_mtr)
)
VariantAnnotation::writeVcf(output_vcf,${_output:nr},index = TRUE)
###Output
_____no_output_____ |
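###Markdown
 After a run finishes, the per-gene RDS files can be inspected interactively. The snippet below is a minimal sketch that follows the `uni_susie` MWE above (default `name` is "demo"); adjust the path to your own run.
###Code
# R sketch: summarize credible sets from a uni_susie result.
# `fitted` is the per-tissue list of susie fits saved by uni_susie_1.
fitted = readRDS("MWE/susie_per_gene/cache/demo.ENSG00000000457.unisusie.fit.rds")
for (tissue in names(fitted)) {
    fit = fitted[[tissue]]
    cat(tissue, ":", length(fit$sets$cs), "credible set(s)\n")
    str(fit$sets$cs) # indices of the variants in each credible set
}
###Output
_____no_output_____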
notebooks/image/removeBands.ipynb | ###Markdown
Remove some bands
###Code
# `tools`, `eprint`, and the example image `i` come from the notebook's setup
# (geetools), e.g. `from geetools import tools` and `from geetools.ui import eprint`
removed = tools.image.removeBands(i, ['B1', 'B2', 'B7'])
eprint(removed.bandNames())
###Output
_____no_output_____
###Markdown
In a collection
###Code
col = ee.ImageCollection('COPERNICUS/S2').limit(5)
def print_bands(col):
info = col.getInfo()
images = info['features']
for image in images:
bands = image['bands']
print([band['id'] for band in bands])
print_bands(col)
removed_col = col.map(lambda img: tools.image.removeBands(img, ['B1', 'B2', 'B7']))
print_bands(removed_col)
###Output
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
###Markdown
Remove some bands
###Code
removed = tools.image.removeBands(i, ['B1', 'B2', 'B7'])
ui.eprint(removed.bandNames())
###Output
['B3', 'B4', 'B5', 'B6', 'B8', 'B9']
###Markdown
In a collection
###Code
col = ee.ImageCollection('COPERNICUS/S2').limit(5)
def print_bands(col):
info = col.getInfo()
images = info['features']
for image in images:
bands = image['bands']
print([band['id'] for band in bands])
print_bands(col)
remove_f = wrapper(tools.image.removeBands, ['B1', 'B2', 'B7'])
removed_col = col.map(remove_f)
print_bands(removed_col)
###Output
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
###Markdown
Remove some bands
###Code
removed = tools.image.removeBands(i, ['B1', 'B2', 'B7'])
ui.eprint(removed.bandNames())
###Output
_____no_output_____
###Markdown
In a collection
###Code
col = ee.ImageCollection('COPERNICUS/S2').limit(5)
def print_bands(col):
info = col.getInfo()
images = info['features']
for image in images:
bands = image['bands']
print([band['id'] for band in bands])
print_bands(col)
removed_col = col.map(lambda img: tools.image.removeBands(img, ['B1', 'B2', 'B7']))
print_bands(removed_col)
###Output
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
['B3', 'B4', 'B5', 'B6', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12', 'QA10', 'QA20', 'QA60']
|